
IEEE Winter Conference on Applications of Computer Vision: Latest Publications

CSAM: A 2.5D Cross-Slice Attention Module for Anisotropic Volumetric Medical Image Segmentation.
Pub Date : 2024-01-01 Epub Date: 2024-04-09 DOI: 10.1109/wacv57701.2024.00582
Alex Ling Yu Hung, Haoxin Zheng, Kai Zhao, Xiaoxi Du, Kaifeng Pang, Qi Miao, Steven S Raman, Demetri Terzopoulos, Kyunghyun Sung

A large portion of volumetric medical data, especially magnetic resonance imaging (MRI) data, is anisotropic, as the through-plane resolution is typically much lower than the in-plane resolution. Both 3D and purely 2D deep learning-based segmentation methods are deficient in dealing with such volumetric data since the performance of 3D methods suffers when confronting anisotropic data, and 2D methods disregard crucial volumetric information. Insufficient work has been done on 2.5D methods, in which 2D convolution is mainly used in concert with volumetric information. These models focus on learning the relationship across slices, but typically have many parameters to train. We offer a Cross-Slice Attention Module (CSAM) with minimal trainable parameters, which captures information across all the slices in the volume by applying semantic, positional, and slice attention on deep feature maps at different scales. Our extensive experiments using different network architectures and tasks demonstrate the usefulness and generalizability of CSAM. Associated code is available at https://github.com/aL3x-O-o-Hung/CSAM.
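
The abstract describes attending across the slices of per-slice 2D feature maps. Below is a minimal, hedged sketch of that general idea in PyTorch; it is not the authors' CSAM implementation (which applies semantic, positional, and slice attention at multiple scales and is available in their repository), and all module and parameter names here are illustrative.

```python
# Minimal sketch of cross-slice attention for 2.5D segmentation.
# NOT the authors' CSAM; names and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class CrossSliceAttention(nn.Module):
    """Attend across the slice axis of per-slice 2D feature maps."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (S, C, H, W) -- deep features for S slices of one volume.
        s, c, _, _ = feats.shape
        # Summarize each slice spatially so attention runs over slices only.
        tokens = feats.mean(dim=(2, 3)).unsqueeze(0)            # (1, S, C)
        attended, _ = self.attn(tokens, tokens, tokens)         # (1, S, C)
        gates = torch.sigmoid(self.norm(attended)).squeeze(0)   # (S, C)
        # Re-weight every slice's feature map by its attended channel gates.
        return feats * gates.view(s, c, 1, 1)

if __name__ == "__main__":
    x = torch.randn(20, 64, 32, 32)    # 20 slices, 64 channels
    y = CrossSliceAttention(64)(x)
    print(y.shape)                     # torch.Size([20, 64, 32, 32])
```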

{"title":"CSAM: A 2.5D Cross-Slice Attention Module for Anisotropic Volumetric Medical Image Segmentation.","authors":"Alex Ling Yu Hung, Haoxin Zheng, Kai Zhao, Xiaoxi Du, Kaifeng Pang, Qi Miao, Steven S Raman, Demetri Terzopoulos, Kyunghyun Sung","doi":"10.1109/wacv57701.2024.00582","DOIUrl":"10.1109/wacv57701.2024.00582","url":null,"abstract":"<p><p>A large portion of volumetric medical data, especially magnetic resonance imaging (MRI) data, is anisotropic, as the through-plane resolution is typically much lower than the in-plane resolution. Both 3D and purely 2D deep learning-based segmentation methods are deficient in dealing with such volumetric data since the performance of 3D methods suffers when confronting anisotropic data, and 2D methods disregard crucial volumetric information. Insufficient work has been done on 2.5D methods, in which 2D convolution is mainly used in concert with volumetric information. These models focus on learning the relationship across slices, but typically have many parameters to train. We offer a Cross-Slice Attention Module (CSAM) with minimal trainable parameters, which captures information across all the slices in the volume by applying semantic, positional, and slice attention on deep feature maps at different scales. Our extensive experiments using different network architectures and tasks demonstrate the usefulness and generalizability of CSAM. Associated code is available at https://github.com/aL3x-O-o-Hung/CSAM.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2024 ","pages":"5911-5920"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11349312/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ordinal Classification with Distance Regularization for Robust Brain Age Prediction.
Pub Date : 2024-01-01 Epub Date: 2024-04-09 DOI: 10.1109/wacv57701.2024.00770
Jay Shah, Md Mahfuzur Rahman Siddiquee, Yi Su, Teresa Wu, Baoxin Li

Age is one of the major known risk factors for Alzheimer's Disease (AD). Detecting AD early is crucial for effective treatment and preventing irreversible brain damage. Brain age, a measure derived from brain imaging reflecting structural changes due to aging, may have the potential to identify AD onset, assess disease risk, and plan targeted interventions. Deep learning-based regression techniques to predict brain age from magnetic resonance imaging (MRI) scans have shown great accuracy recently. However, these methods are subject to an inherent regression-to-the-mean effect, which causes a systematic bias resulting in an overestimation of brain age in young subjects and underestimation in old subjects. This weakens the reliability of predicted brain age as a valid biomarker for downstream clinical applications. Here, we reformulate the brain age prediction task from regression to classification to address the issue of systematic bias. Recognizing the importance of preserving ordinal information from ages to understand the aging trajectory and monitor aging longitudinally, we propose a novel ORdinal Distance Encoded Regularization (ORDER) loss that incorporates the order of age labels, enhancing the model's ability to capture age-related patterns. Extensive experiments and ablation studies demonstrate that this framework reduces systematic bias, outperforms state-of-the-art methods by statistically significant margins, and can better capture subtle differences between clinical groups in an independent AD dataset. Our implementation is publicly available at https://github.com/jaygshah/Robust-Brain-Age-Prediction.
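
As a rough illustration of the reformulation described above, the sketch below combines an age-bin classification loss with a pairwise regularizer that pushes feature distances to respect age differences. The exact ORDER loss is defined in the authors' repository; the specific form used here (L1 feature distances hinged against label gaps) is an assumption.

```python
# Hedged sketch of age-bin classification plus an ordinal distance
# regularizer, in the spirit of the abstract. The exact ORDER loss may differ.
import torch
import torch.nn.functional as F

def ordinal_distance_regularizer(features: torch.Tensor,
                                 ages: torch.Tensor) -> torch.Tensor:
    """Encourage pairwise feature distances to grow with age differences."""
    pair_feat = torch.cdist(features, features, p=1)           # (B, B)
    pair_age = (ages[:, None] - ages[None, :]).abs().float()   # (B, B)
    # Penalize pairs whose feature distance under-represents their age gap.
    return F.relu(pair_age - pair_feat).mean()

def order_style_loss(logits, features, age_bins, ages, lam=0.1):
    ce = F.cross_entropy(logits, age_bins)      # ordinal labels as classes
    reg = ordinal_distance_regularizer(features, ages)
    return ce + lam * reg

if __name__ == "__main__":
    B, C, D = 8, 20, 128                        # batch, age bins, feature dim
    logits, feats = torch.randn(B, C), torch.randn(B, D)
    ages = torch.randint(55, 90, (B,))
    bins = ((ages - 55) // 2).clamp(max=C - 1)
    print(order_style_loss(logits, feats, bins, ages).item())
```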

{"title":"Ordinal Classification with Distance Regularization for Robust Brain Age Prediction.","authors":"Jay Shah, Md Mahfuzur Rahman Siddiquee, Yi Su, Teresa Wu, Baoxin Li","doi":"10.1109/wacv57701.2024.00770","DOIUrl":"https://doi.org/10.1109/wacv57701.2024.00770","url":null,"abstract":"<p><p>Age is one of the major known risk factors for Alzheimer's Disease (AD). Detecting AD early is crucial for effective treatment and preventing irreversible brain damage. Brain age, a measure derived from brain imaging reflecting structural changes due to aging, may have the potential to identify AD onset, assess disease risk, and plan targeted interventions. Deep learning-based regression techniques to predict brain age from magnetic resonance imaging (MRI) scans have shown great accuracy recently. However, these methods are subject to an inherent regression to the mean effect, which causes a systematic bias resulting in an overestimation of brain age in young subjects and underestimation in old subjects. This weakens the reliability of predicted brain age as a valid biomarker for downstream clinical applications. Here, we reformulate the brain age prediction task from regression to classification to address the issue of systematic bias. Recognizing the importance of preserving ordinal information from ages to understand aging trajectory and monitor aging longitudinally, we propose a novel ORdinal Distance Encoded Regularization (ORDER) loss that incorporates the order of age labels, enhancing the model's ability to capture age-related patterns. Extensive experiments and ablation studies demonstrate that this framework reduces systematic bias, outperforms state-of-art methods by statistically significant margins, and can better capture subtle differences between clinical groups in an independent AD dataset. Our implementation is publicly available at https://github.com/jaygshah/Robust-Brain-Age-Prediction.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2024 ","pages":"7867-7876"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11008505/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140867793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PathLDM: Text conditioned Latent Diffusion Model for Histopathology.
Pub Date : 2024-01-01 Epub Date: 2024-04-09 DOI: 10.1109/wacv57701.2024.00510
Srikar Yellapragada, Alexandros Graikos, Prateek Prasanna, Tahsin Kurc, Joel Saltz, Dimitris Samaras

To achieve high-quality results, diffusion models must be trained on large datasets. This can be notably prohibitive for models in specialized domains, such as computational pathology. Conditioning on labeled data is known to help in data-efficient model training. Therefore, histopathology reports, which are rich in valuable clinical information, are an ideal choice as guidance for a histopathology generative model. In this paper, we introduce PathLDM, the first text-conditioned Latent Diffusion Model tailored for generating high-quality histopathology images. Leveraging the rich contextual information provided by pathology text reports, our approach fuses image and textual data to enhance the generation process. By utilizing GPT's capabilities to distill and summarize complex text reports, we establish an effective conditioning mechanism. Through strategic conditioning and necessary architectural enhancements, we achieved a SoTA FID score of 7.64 for text-to-image generation on the TCGA-BRCA dataset, significantly outperforming the closest text-conditioned competitor with FID 30.1.
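
The conditioning mechanism mentioned above can be pictured as injecting an embedded report summary into the denoiser through cross-attention. The sketch below shows only that generic mechanism; PathLDM's actual text encoder, latent space, and GPT summarization pipeline are not reproduced here, and every layer name is invented for illustration.

```python
# Hedged sketch of text conditioning in a latent diffusion denoiser via
# cross-attention. This is NOT PathLDM's architecture; names are invented.
import torch
import torch.nn as nn

class TextConditionedDenoiser(nn.Module):
    def __init__(self, latent_dim=64, text_dim=512):
        super().__init__()
        self.to_tokens = nn.Conv2d(latent_dim, latent_dim, 1)
        self.cross_attn = nn.MultiheadAttention(latent_dim, 4, batch_first=True,
                                                kdim=text_dim, vdim=text_dim)
        self.out = nn.Conv2d(latent_dim, latent_dim, 1)

    def forward(self, z_noisy, text_emb):
        # z_noisy: (B, C, H, W) noisy latents; text_emb: (B, T, text_dim)
        b, c, h, w = z_noisy.shape
        q = self.to_tokens(z_noisy).flatten(2).transpose(1, 2)   # (B, HW, C)
        attended, _ = self.cross_attn(q, text_emb, text_emb)     # (B, HW, C)
        fused = attended.transpose(1, 2).reshape(b, c, h, w)
        return self.out(fused)        # predicted noise, conditioned on text

if __name__ == "__main__":
    z = torch.randn(2, 64, 16, 16)
    report = torch.randn(2, 77, 512)  # e.g. embedded report-summary tokens
    eps_hat = TextConditionedDenoiser()(z, report)
    print(eps_hat.shape)              # torch.Size([2, 64, 16, 16])
```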

{"title":"PathLDM: Text conditioned Latent Diffusion Model for Histopathology.","authors":"Srikar Yellapragada, Alexandros Graikos, Prateek Prasanna, Tahsin Kurc, Joel Saltz, Dimitris Samaras","doi":"10.1109/wacv57701.2024.00510","DOIUrl":"10.1109/wacv57701.2024.00510","url":null,"abstract":"<p><p>To achieve high-quality results, diffusion models must be trained on large datasets. This can be notably prohibitive for models in specialized domains, such as computational pathology. Conditioning on labeled data is known to help in data-efficient model training. Therefore, histopathology reports, which are rich in valuable clinical information, are an ideal choice as guidance for a histopathology generative model. In this paper, we introduce PathLDM, the first text-conditioned Latent Diffusion Model tailored for generating high-quality histopathology images. Leveraging the rich contextual information provided by pathology text reports, our approach fuses image and textual data to enhance the generation process. By utilizing GPT's capabilities to distill and summarize complex text reports, we establish an effective conditioning mechanism. Through strategic conditioning and necessary architectural enhancements, we achieved a SoTA FID score of 7.64 for text-to-image generation on the TCGA-BRCA dataset, significantly outperforming the closest text-conditioned competitor with FID 30.1.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2024 ","pages":"5170-5179"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11131586/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141163007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Brainomaly: Unsupervised Neurologic Disease Detection Utilizing Unannotated T1-weighted Brain MR Images.
Pub Date : 2024-01-01 Epub Date: 2024-04-09 DOI: 10.1109/wacv57701.2024.00740
Md Mahfuzur Rahman Siddiquee, Jay Shah, Teresa Wu, Catherine Chong, Todd J Schwedt, Gina Dumkrieger, Simona Nikolova, Baoxin Li

Harnessing the power of deep neural networks in the medical imaging domain is challenging due to the difficulties in acquiring large annotated datasets, especially for rare diseases, which involve high costs, time, and effort for annotation. Unsupervised disease detection methods, such as anomaly detection, can significantly reduce human effort in these scenarios. While anomaly detection typically focuses on learning from images of healthy subjects only, real-world situations often present unannotated datasets with a mixture of healthy and diseased subjects. Recent studies have demonstrated that utilizing such unannotated images can improve unsupervised disease and anomaly detection. However, these methods do not utilize knowledge specific to registered neuroimages, resulting in a subpar performance in neurologic disease detection. To address this limitation, we propose Brainomaly, a GAN-based image-to-image translation method specifically designed for neurologic disease detection. Brainomaly not only offers tailored image-to-image translation suitable for neuroimages but also leverages unannotated mixed images to achieve superior neurologic disease detection. Additionally, we address the issue of model selection for inference without annotated samples by proposing a pseudo-AUC metric, further enhancing Brainomaly's detection performance. Extensive experiments and ablation studies demonstrate that Brainomaly outperforms existing state-of-the-art unsupervised disease and anomaly detection methods by significant margins in Alzheimer's disease detection using a publicly available dataset and headache detection using an institutional dataset. The code is available from https://github.com/mahfuzmohammad/Brainomaly.
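
The pseudo-AUC idea for annotation-free model selection can be pictured as ranking the anomaly scores of a known-healthy set against an unannotated mixed set; a model that separates the two better receives a higher value. The snippet below is only this general ranking statistic, not necessarily the exact metric defined in the paper, and the function name is ours.

```python
# Hedged sketch of a "pseudo-AUC" style model-selection criterion: rank
# anomaly scores of a known-healthy set against an unannotated mixed set
# (healthy + diseased). The precise Brainomaly metric may differ.
import numpy as np

def pseudo_auc(scores_healthy: np.ndarray, scores_mixed: np.ndarray) -> float:
    """Probability that a random mixed-set scan outscores a healthy scan."""
    healthy = np.asarray(scores_healthy)[:, None]   # (Nh, 1)
    mixed = np.asarray(scores_mixed)[None, :]       # (1, Nm)
    wins = (mixed > healthy).mean()
    ties = (mixed == healthy).mean()
    return float(wins + 0.5 * ties)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    healthy_scores = rng.normal(0.0, 1.0, 200)
    # Mixed set: mostly healthy-looking, plus a diseased subpopulation.
    mixed_scores = np.concatenate([rng.normal(0.0, 1.0, 150),
                                   rng.normal(2.0, 1.0, 50)])
    print(round(pseudo_auc(healthy_scores, mixed_scores), 3))
```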

{"title":"Brainomaly: Unsupervised Neurologic Disease Detection Utilizing Unannotated T1-weighted Brain MR Images.","authors":"Md Mahfuzur Rahman Siddiquee, Jay Shah, Teresa Wu, Catherine Chong, Todd J Schwedt, Gina Dumkrieger, Simona Nikolova, Baoxin Li","doi":"10.1109/wacv57701.2024.00740","DOIUrl":"10.1109/wacv57701.2024.00740","url":null,"abstract":"<p><p>Harnessing the power of deep neural networks in the medical imaging domain is challenging due to the difficulties in acquiring large annotated datasets, especially for rare diseases, which involve high costs, time, and effort for annotation. Unsupervised disease detection methods, such as anomaly detection, can significantly reduce human effort in these scenarios. While anomaly detection typically focuses on learning from images of healthy subjects only, real-world situations often present unannotated datasets with a mixture of healthy and diseased subjects. Recent studies have demonstrated that utilizing such unannotated images can improve unsupervised disease and anomaly detection. However, these methods do not utilize knowledge specific to registered neuroimages, resulting in a subpar performance in neurologic disease detection. To address this limitation, we propose Brainomaly, a GAN-based image-to-image translation method specifically designed for neurologic disease detection. Brainomaly not only offers tailored image-to-image translation suitable for neuroimages but also leverages unannotated mixed images to achieve superior neurologic disease detection. Additionally, we address the issue of model selection for inference without annotated samples by proposing a pseudo-AUC metric, further enhancing Brainomaly's detection performance. Extensive experiments and ablation studies demonstrate that Brainomaly outperforms existing state-of-the-art unsupervised disease and anomaly detection methods by significant margins in Alzheimer's disease detection using a publicly available dataset and headache detection using an institutional dataset. The code is available from https://github.com/mahfuzmohammad/Brainomaly.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2024 ","pages":"7558-7567"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11078334/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140892793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semantic-aware Video Representation for Few-shot Action Recognition.
Pub Date : 2024-01-01 Epub Date: 2024-04-09 DOI: 10.1109/wacv57701.2024.00633
Yutao Tang, Benjamín Béjar, René Vidal

Recent work on action recognition leverages 3D features and textual information to achieve state-of-the-art performance. However, most of the current few-shot action recognition methods still rely on 2D frame-level representations, often require additional components to model temporal relations, and employ complex distance functions to achieve accurate alignment of these representations. In addition, existing methods struggle to effectively integrate textual semantics, some resorting to concatenation or addition of textual and visual features, and some using text merely as an additional supervision without truly achieving feature fusion and information transfer from different modalities. In this work, we propose a simple yet effective Semantic-Aware Few-Shot Action Recognition (SAFSAR) model to address these issues. We show that directly leveraging a 3D feature extractor combined with an effective feature-fusion scheme, and a simple cosine similarity for classification can yield better performance without the need of extra components for temporal modeling or complex distance functions. We introduce an innovative scheme to encode the textual semantics into the video representation which adaptively fuses features from text and video, and encourages the visual encoder to extract more semantically consistent features. In this scheme, SAFSAR achieves alignment and fusion in a compact way. Experiments on five challenging few-shot action recognition benchmarks under various settings demonstrate that the proposed SAFSAR model significantly improves the state-of-the-art performance.
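
The classification step described above, fused text-video features compared to class prototypes with cosine similarity, can be sketched as follows. SAFSAR's fusion is adaptive and learned; the fixed convex combination and prototype averaging below are simplifying assumptions.

```python
# Hedged sketch of few-shot classification with fused text/video features
# and cosine similarity to class prototypes. Not the SAFSAR implementation.
import torch
import torch.nn.functional as F

def fuse(video_feat: torch.Tensor, text_feat: torch.Tensor,
         alpha: float = 0.5) -> torch.Tensor:
    """Toy fusion: a fixed convex combination of the two modalities."""
    return alpha * video_feat + (1 - alpha) * text_feat

def few_shot_logits(support_feats, support_labels, query_feats, n_way):
    # Class prototypes: mean of (fused) support features per class.
    protos = torch.stack([support_feats[support_labels == c].mean(0)
                          for c in range(n_way)])            # (n_way, D)
    return F.cosine_similarity(query_feats[:, None, :],      # (Q, n_way)
                               protos[None, :, :], dim=-1)

if __name__ == "__main__":
    D, n_way, k_shot, n_query = 256, 5, 3, 10
    sup_v, sup_t = torch.randn(n_way * k_shot, D), torch.randn(n_way * k_shot, D)
    sup_y = torch.arange(n_way).repeat_interleave(k_shot)
    qry = fuse(torch.randn(n_query, D), torch.randn(n_query, D))
    logits = few_shot_logits(fuse(sup_v, sup_t), sup_y, qry, n_way)
    print(logits.argmax(dim=1))
```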

{"title":"Semantic-aware Video Representation for Few-shot Action Recognition.","authors":"Yutao Tang, Benjamín Béjar, René Vidal","doi":"10.1109/wacv57701.2024.00633","DOIUrl":"10.1109/wacv57701.2024.00633","url":null,"abstract":"<p><p>Recent work on action recognition leverages 3D features and textual information to achieve state-of-the-art performance. However, most of the current few-shot action recognition methods still rely on 2D frame-level representations, often require additional components to model temporal relations, and employ complex distance functions to achieve accurate alignment of these representations. In addition, existing methods struggle to effectively integrate textual semantics, some resorting to concatenation or addition of textual and visual features, and some using text merely as an additional supervision without truly achieving feature fusion and information transfer from different modalities. In this work, we propose a simple yet effective <b>S</b>emantic-<b>A</b>ware <b>F</b>ew-<b>S</b>hot <b>A</b>ction <b>R</b>ecognition (<b>SAFSAR</b>) model to address these issues. We show that directly leveraging a 3D feature extractor combined with an effective feature-fusion scheme, and a simple cosine similarity for classification can yield better performance without the need of extra components for temporal modeling or complex distance functions. We introduce an innovative scheme to encode the textual semantics into the video representation which adaptively fuses features from text and video, and encourages the visual encoder to extract more semantically consistent features. In this scheme, SAFSAR achieves alignment and fusion in a compact way. Experiments on five challenging few-shot action recognition benchmarks under various settings demonstrate that the proposed SAFSAR model significantly improves the state-of-the-art performance.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2024 ","pages":"6444-6454"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11337110/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142019731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Domain Generalization with Correlated Style Uncertainty.
Pub Date : 2024-01-01 Epub Date: 2024-04-09 DOI: 10.1109/wacv57701.2024.00200
Zheyuan Zhang, Bin Wang, Debesh Jha, Ugur Demir, Ulas Bagci

Domain generalization (DG) approaches intend to extract domain-invariant features that can lead to a more robust deep learning model. In this regard, style augmentation is a strong DG method that takes advantage of instance-specific feature statistics containing informative style characteristics to synthesize novel domains. While it is one of the state-of-the-art methods, prior works on style augmentation have either disregarded the interdependence amongst distinct feature channels or have solely constrained style augmentation to linear interpolation. To address these research gaps, in this work, we introduce a novel augmentation approach, named Correlated Style Uncertainty (CSU), surpassing the limitations of linear interpolation in style statistic space and simultaneously preserving vital correlation information. Our method's efficacy is established through extensive experimentation on diverse cross-domain computer vision and medical imaging classification tasks: the PACS, Office-Home, and Camelyon17 datasets, and the Duke-Market1501 instance retrieval task. The results showcase a remarkable improvement margin over existing state-of-the-art techniques. The source code is available at https://github.com/freshman97/CSU.
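
A rough sketch of the correlated-style idea: perturb per-channel feature statistics with noise whose cross-channel covariance is estimated from the batch, rather than with independent or linearly interpolated noise. The actual CSU formulation lives in the linked repository; the construction below is an assumption for illustration only.

```python
# Hedged sketch of correlated style perturbation of per-channel feature
# statistics. Not the CSU implementation; the noise model is an assumption.
import torch

def correlated_style_augment(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # x: (B, C, H, W). Instance-wise style statistics per channel.
    mu = x.mean(dim=(2, 3))                        # (B, C)
    sig = x.std(dim=(2, 3)) + eps                  # (B, C)

    def correlated_noise(stat):                    # sample noise sharing the
        centered = stat - stat.mean(0, keepdim=True)       # batch covariance
        cov = centered.T @ centered / max(stat.shape[0] - 1, 1)
        L = torch.linalg.cholesky(cov + eps * torch.eye(stat.shape[1]))
        return torch.randn_like(stat) @ L.T        # (B, C), correlated

    new_mu = mu + correlated_noise(mu)
    new_sig = (sig + correlated_noise(sig)).clamp_min(eps)
    normed = (x - mu[..., None, None]) / sig[..., None, None]
    return normed * new_sig[..., None, None] + new_mu[..., None, None]

if __name__ == "__main__":
    feats = torch.randn(16, 32, 14, 14)
    print(correlated_style_augment(feats).shape)   # torch.Size([16, 32, 14, 14])
```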

{"title":"Domain Generalization with Correlated Style Uncertainty.","authors":"Zheyuan Zhang, Bin Wang, Debesh Jha, Ugur Demir, Ulas Bagci","doi":"10.1109/wacv57701.2024.00200","DOIUrl":"10.1109/wacv57701.2024.00200","url":null,"abstract":"<p><p>Domain generalization (DG) approaches intend to extract domain invariant features that can lead to a more robust deep learning model. In this regard, style augmentation is a strong DG method taking advantage of instance-specific feature statistics containing informative style characteristics to synthetic novel domains. While it is one of the state-of-the-art methods, prior works on style augmentation have either disregarded the interdependence amongst distinct feature channels or have solely constrained style augmentation to linear interpolation. To address these research gaps, in this work, we introduce a novel augmentation approach, named Correlated Style Uncertainty (CSU), surpassing the limitations of linear interpolation in style statistic space and simultaneously preserving vital correlation information. Our method's efficacy is established through extensive experimentation on diverse cross-domain computer vision and medical imaging classification tasks: PACS, Office-Home, and Camelyon17 datasets, and the Duke-Market1501 instance retrieval task. The results showcase a remarkable improvement margin over existing state-of-the-art techniques. The source code is available https://github.com/freshman97/CSU.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2024 ","pages":"1989-1998"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11230655/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141560398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Augmentation by Counterfactual Explanation - Fixing an Overconfident Classifier.
Pub Date : 2023-01-01 Epub Date: 2023-02-06 DOI: 10.1109/wacv56688.2023.00470
Sumedha Singla, Nihal Murali, Forough Arabshahi, Sofia Triantafyllou, Kayhan Batmanghelich

A highly accurate but overconfident model is ill-suited for deployment in critical applications such as healthcare and autonomous driving. The classification outcome should reflect a high uncertainty on ambiguous in-distribution samples that lie close to the decision boundary. The model should also refrain from making overconfident decisions on samples that lie far outside its training distribution, far-out-of-distribution (far-OOD), or on unseen samples from novel classes that lie near its training distribution (near-OOD). This paper proposes an application of counterfactual explanations in fixing an over-confident classifier. Specifically, we propose to fine-tune a given pre-trained classifier using augmentations from a counterfactual explainer (ACE) to fix its uncertainty characteristics while retaining its predictive performance. We perform extensive experiments with detecting far-OOD, near-OOD, and ambiguous samples. Our empirical results show that the revised model has improved uncertainty measures, and its performance is competitive with the state-of-the-art methods.
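
The fine-tuning idea can be sketched as generating counterfactuals at increasing strengths and training against correspondingly softened labels, so that confidence decays toward the decision boundary. The `explainer` callable below is a stand-in assumption for a pre-trained counterfactual generator, not an API from the paper.

```python
# Hedged sketch of fine-tuning with counterfactual augmentations and
# softened labels. `explainer` is a placeholder, not the paper's API.
import torch
import torch.nn.functional as F

def ace_finetune_step(model, explainer, x, y, optimizer, n_classes,
                      strengths=(0.25, 0.5, 0.75)):
    losses = []
    for s in strengths:
        x_cf, y_cf = explainer(x, y, s)      # counterfactual image + target class
        hard = F.one_hot(y, n_classes).float()
        flipped = F.one_hot(y_cf, n_classes).float()
        soft = (1 - s) * hard + s * flipped  # soften toward the flipped class
        logp = F.log_softmax(model(x_cf), dim=1)
        losses.append(-(soft * logp).sum(dim=1).mean())
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    # Dummy explainer: blends the input toward noise and flips the label.
    explainer = lambda x, y, s: (x * (1 - s) + torch.randn_like(x) * s, 1 - y)
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 2, (8,))
    print(ace_finetune_step(model, explainer, x, y, opt, n_classes=2))
```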

{"title":"Augmentation by Counterfactual Explanation - Fixing an Overconfident Classifier.","authors":"Sumedha Singla,&nbsp;Nihal Murali,&nbsp;Forough Arabshahi,&nbsp;Sofia Triantafyllou,&nbsp;Kayhan Batmanghelich","doi":"10.1109/wacv56688.2023.00470","DOIUrl":"10.1109/wacv56688.2023.00470","url":null,"abstract":"<p><p>A highly accurate but overconfident model is ill-suited for deployment in critical applications such as healthcare and autonomous driving. The classification outcome should reflect a high uncertainty on ambiguous in-distribution samples that lie close to the decision boundary. The model should also refrain from making overconfident decisions on samples that lie far outside its training distribution, far-out-of-distribution (far-OOD), or on unseen samples from novel classes that lie near its training distribution (near-OOD). This paper proposes an application of counterfactual explanations in fixing an over-confident classifier. Specifically, we propose to fine-tune a given pre-trained classifier using augmentations from a counterfactual explainer (ACE) to fix its uncertainty characteristics while retaining its predictive performance. We perform extensive experiments with detecting far-OOD, near-OOD, and ambiguous samples. Our empirical results show that the revised model have improved uncertainty measures, and its performance is competitive to the state-of-the-art methods.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2023 ","pages":"4709-4719"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10506513/pdf/nihms-1915803.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10313085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Anisotropic Multi-Scale Graph Convolutional Network for Dense Shape Correspondence.
Mohammad Farazi, Wenhui Zhu, Zhangsihao Yang, Yalin Wang

This paper studies 3D dense shape correspondence, a key shape analysis application in computer vision and graphics. We introduce a novel hybrid geometric deep learning-based model that learns geometrically meaningful and discretization-independent features. The proposed framework has a U-Net model as the primary node feature extractor, followed by a successive spectral-based graph convolutional network. To create a diverse set of filters, we use anisotropic wavelet basis filters, which are sensitive to both different directions and band-passes. This filter set overcomes the common over-smoothing behavior of conventional graph neural networks. To further improve the model's performance, we add a function that perturbs the feature maps in the last layer ahead of fully connected layers, forcing the network to learn more discriminative features overall. The resulting correspondence maps show state-of-the-art performance on the benchmark datasets based on average geodesic errors and superior robustness to discretization in 3D meshes. Our approach provides new insights and practical solutions to dense shape correspondence research.
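
To make the spectral filtering step concrete, the sketch below applies a bank of wavelet-style band-pass filters to node features through the Laplacian eigenbasis. The paper's anisotropic, direction-sensitive wavelet construction and its U-Net feature extractor are not reproduced here; the scale-only filters are a simplification and the kernel shape is an assumption.

```python
# Hedged sketch of a spectral filter bank acting on mesh-graph node features.
# The anisotropic wavelet design of the paper is NOT reproduced here.
import torch

def spectral_filter_bank(features, laplacian, scales=(1.0, 4.0, 16.0)):
    # features: (N, F) node features; laplacian: (N, N) dense graph Laplacian.
    evals, evecs = torch.linalg.eigh(laplacian)           # spectrum of the mesh
    spectral = evecs.T @ features                         # project to eigenbasis
    outputs = []
    for s in scales:
        # Band-pass response at scale s (an illustrative choice of kernel).
        response = (s * evals) * torch.exp(-s * evals)
        outputs.append(evecs @ (response[:, None] * spectral))
    return torch.cat(outputs, dim=1)                      # (N, F * len(scales))

if __name__ == "__main__":
    N, F_dim = 50, 16
    adj = (torch.rand(N, N) < 0.1).float()
    adj = torch.triu(adj, 1)
    adj = adj + adj.T                                     # symmetric adjacency
    lap = torch.diag(adj.sum(1)) - adj                    # combinatorial Laplacian
    x = torch.randn(N, F_dim)
    print(spectral_filter_bank(x, lap).shape)             # torch.Size([50, 48])
```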

{"title":"Anisotropic Multi-Scale Graph Convolutional Network for Dense Shape Correspondence.","authors":"Mohammad Farazi,&nbsp;Wenhui Zhu,&nbsp;Zhangsihao Yang,&nbsp;Yalin Wang","doi":"10.1109/wacv56688.2023.00316","DOIUrl":"https://doi.org/10.1109/wacv56688.2023.00316","url":null,"abstract":"<p><p>This paper studies 3D dense shape correspondence, a key shape analysis application in computer vision and graphics. We introduce a novel hybrid geometric deep learning-based model that learns geometrically meaningful and discretization-independent features. The proposed framework has a U-Net model as the primary node feature extractor, followed by a successive spectral-based graph convolutional network. To create a diverse set of filters, we use anisotropic wavelet basis filters, being sensitive to both different directions and band-passes. This filter set overcomes the common over-smoothing behavior of conventional graph neural networks. To further improve the model's performance, we add a function that perturbs the feature maps in the last layer ahead of fully connected layers, forcing the network to learn more discriminative features overall. The resulting correspondence maps show state-of-the-art performance on the benchmark datasets based on average geodesic errors and superior robustness to discretization in 3D meshes. Our approach provides new insights and practical solutions to the dense shape correspondence research.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2023 ","pages":"3145-3154"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10448951/pdf/nihms-1845628.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10101390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Attend Who is Weak: Pruning-assisted Medical Image Localization under Sophisticated and Implicit Imbalances.
Ajay Jaiswal, Tianlong Chen, Justin F Rousseau, Yifan Peng, Ying Ding, Zhangyang Wang

Deep neural networks (DNNs) have rapidly become a de facto choice for medical image understanding tasks. However, DNNs are notoriously fragile to the class imbalance in image classification. We further point out that such imbalance fragility can be amplified when it comes to more sophisticated tasks such as pathology localization, as imbalances in such problems can have highly complex and often implicit forms of presence. For example, different pathologies can have different sizes or colors (w.r.t. the background), different underlying demographic distributions, and in general different difficulty levels to recognize, even in a meticulously curated balanced distribution of training data. In this paper, we propose to use pruning to automatically and adaptively identify hard-to-learn (HTL) training samples, and improve pathology localization by attending to them explicitly, during training in supervised, semi-supervised, and weakly-supervised settings. Our main inspiration is drawn from the recent finding that deep classification models have difficult-to-memorize samples and those may be effectively exposed through network pruning [15] - and we extend this observation beyond classification for the first time. We also present an interesting demographic analysis which illustrates the ability of HTLs to capture complex demographic imbalances. Our extensive experiments on the Skin Lesion Localization task in multiple training settings by paying additional attention to HTLs show significant improvement of localization performance by ~2-3%.
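
One way to picture the pruning-assisted identification of hard-to-learn samples: magnitude-prune a copy of the trained model and flag samples whose predictions flip or whose confidence collapses under pruning. The criterion below is a hedged sketch of that idea, not the paper's exact rule, and all names are ours.

```python
# Hedged sketch of flagging hard-to-learn (HTL) samples via pruning:
# compare the dense model with a magnitude-pruned copy on each sample.
import copy
import torch
import torch.nn.utils.prune as prune

def flag_htl_samples(model, x, prune_ratio=0.5, conf_drop=0.2):
    pruned = copy.deepcopy(model).eval()
    for module in pruned.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=prune_ratio)
    with torch.no_grad():
        p_dense = torch.softmax(model(x), dim=1)
        p_pruned = torch.softmax(pruned(x), dim=1)
    flipped = p_dense.argmax(1) != p_pruned.argmax(1)
    dropped = (p_dense.max(1).values - p_pruned.max(1).values) > conf_drop
    return flipped | dropped          # boolean mask of HTL candidates

if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10)).eval()
    x = torch.randn(32, 1, 28, 28)
    htl = flag_htl_samples(model, x)
    print(int(htl.sum()), "of", len(htl), "samples flagged as hard-to-learn")
```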

{"title":"Attend Who is Weak: Pruning-assisted Medical Image Localization under Sophisticated and Implicit Imbalances.","authors":"Ajay Jaiswal,&nbsp;Tianlong Chen,&nbsp;Justin F Rousseau,&nbsp;Yifan Peng,&nbsp;Ying Ding,&nbsp;Zhangyang Wang","doi":"10.1109/wacv56688.2023.00496","DOIUrl":"https://doi.org/10.1109/wacv56688.2023.00496","url":null,"abstract":"<p><p>Deep neural networks (DNNs) have rapidly become a de facto choice for medical image understanding tasks. However, DNNs are notoriously fragile to the class imbalance in image classification. We further point out that such imbalance fragility can be amplified when it comes to more sophisticated tasks such as pathology localization, as imbalances in such problems can have highly complex and often implicit forms of presence. For example, different pathology can have different sizes or colors (w.r.t.the background), different underlying demographic distributions, and in general different difficulty levels to recognize, even in a meticulously curated balanced distribution of training data. In this paper, we propose to use pruning to automatically and adaptively identify hard-to-learn (HTL) training samples, and improve pathology localization by attending them explicitly, during training in supervised, semi-supervised, and weakly-supervised settings. Our main inspiration is drawn from the recent finding that deep classification models have difficult-to-memorize samples and those may be effectively exposed through network pruning [15] - and we extend such observation beyond classification for the first time. We also present an interesting demographic analysis which illustrates HTLs ability to capture complex demographic imbalances. Our extensive experiments on the Skin Lesion Localization task in multiple training settings by paying additional attention to HTLs show significant improvement of localization performance by ~2-3%.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2023 ","pages":"4976-4985"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10089697/pdf/nihms-1888485.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9314753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images.
Pub Date : 2023-01-01 Epub Date: 2023-02-06 DOI: 10.1109/wacv56688.2023.00196
Kechun Liu, Beibin Li, Wenjun Wu, Caitlin May, Oliver Chang, Stevan Knezevich, Lisa Reisch, Joann Elmore, Linda Shapiro

Detection of melanocytes serves as a critical prerequisite in assessing melanocytic growth patterns when diagnosing melanoma and its precursor lesions on skin biopsy specimens. However, this detection is challenging due to the visual similarity of melanocytes to other cells in routine Hematoxylin and Eosin (H&E) stained images, leading to the failure of current nuclei detection methods. Stains such as Sox10 can mark melanocytes, but they require an additional step and expense and thus are not regularly used in clinical practice. To address these limitations, we introduce VSGD-Net, a novel detection network that learns melanocyte identification through virtual staining from H&E to Sox10. The method takes only routine H&E images during inference, resulting in a promising approach to support pathologists in the diagnosis of melanoma. To the best of our knowledge, this is the first study that investigates the detection problem using image synthesis features between two distinct pathology stainings. Extensive experimental results show that our proposed model outperforms state-of-the-art nuclei detection methods for melanocyte detection. The source code and pre-trained model are available at: https://github.com/kechunl/VSGD-Net.
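
The virtual-staining-guided idea can be sketched as a shared encoder over the H&E image feeding both a virtual Sox10 decoder (supervised with Sox10 only at training time) and a melanocyte detection head, so the staining target guides the features used for detection. The layers below are placeholder assumptions, not the VSGD-Net architecture or losses.

```python
# Hedged sketch of a shared encoder with a virtual-stain decoder and a
# detection head. Placeholder layers only; NOT the VSGD-Net architecture.
import torch
import torch.nn as nn

class VirtualStainGuidedDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.stain_decoder = nn.Conv2d(64, 3, 1)   # virtual Sox10 image
        self.detect_head = nn.Conv2d(64, 1, 1)     # melanocyte probability map

    def forward(self, he_image):
        feats = self.encoder(he_image)
        return self.stain_decoder(feats), torch.sigmoid(self.detect_head(feats))

if __name__ == "__main__":
    he = torch.randn(1, 3, 128, 128)
    virtual_sox10, melanocyte_map = VirtualStainGuidedDetector()(he)
    print(virtual_sox10.shape, melanocyte_map.shape)
```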

{"title":"VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images.","authors":"Kechun Liu, Beibin Li, Wenjun Wu, Caitlin May, Oliver Chang, Stevan Knezevich, Lisa Reisch, Joann Elmore, Linda Shapiro","doi":"10.1109/wacv56688.2023.00196","DOIUrl":"10.1109/wacv56688.2023.00196","url":null,"abstract":"<p><p>Detection of melanocytes serves as a critical prerequisite in assessing melanocytic growth patterns when diagnosing melanoma and its precursor lesions on skin biopsy specimens. However, this detection is challenging due to the visual similarity of melanocytes to other cells in routine Hematoxylin and Eosin (H&E) stained images, leading to the failure of current nuclei detection methods. Stains such as Sox10 can mark melanocytes, but they require an additional step and expense and thus are not regularly used in clinical practice. To address these limitations, we introduce VSGD-Net, a novel detection network that learns melanocyte identification through virtual staining from H&E to Sox10. The method takes only routine H&E images during inference, resulting in a promising approach to support pathologists in the diagnosis of melanoma. To the best of our knowledge, this is the first study that investigates the detection problem using image synthesis features between two distinct pathology stainings. Extensive experimental results show that our proposed model outperforms state-of-the-art nuclei detection methods for melanocyte detection. The source code and pre-trained model are available at: https://github.com/kechunl/VSGD-Net.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2023 ","pages":"1918-1927"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9977454/pdf/nihms-1876466.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9136262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0