
Latest Neurocomputing Publications

CoFiNet: Unveiling camouflaged objects with multi-scale finesse
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1016/j.neucom.2024.128763
Cunhan Guo, Heyan Huang
Camouflaged Object Detection (COD) is a critical aspect of computer vision aimed at identifying concealed objects, with applications spanning military, industrial, medical, and monitoring domains. To address the problem of poor segmentation of fine details, we introduce a novel method for camouflaged object detection, named CoFiNet. Our approach focuses on multi-scale feature fusion and extraction, with special attention to how effectively the model segments detailed features, enhancing its ability to detect camouflaged objects. CoFiNet adopts a coarse-to-fine strategy. A multi-scale feature integration module is leveraged to enhance the model’s capability to fuse contextual features, and a multi-activation selective kernel module grants the model the ability to autonomously alter its receptive field, enabling it to select an appropriate receptive field for camouflaged objects of different sizes. During mask generation, we employ a dual-mask strategy for image segmentation, separating the reconstruction of coarse and fine masks, which significantly enhances the model’s capacity to learn details. Comprehensive experiments were conducted on four different datasets, demonstrating that CoFiNet achieves state-of-the-art performance across all of them. These results underscore its effectiveness in camouflaged object detection and highlight its potential in various practical application scenarios.
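A minimal PyTorch sketch of the selective-kernel idea described above: parallel dilated-convolution branches give different receptive fields, and a pooled gating head mixes them per input. The branch count, dilation rates, and module names here are illustrative assumptions, not CoFiNet's actual implementation.

```python
import torch
import torch.nn as nn

class SelectiveKernelBlock(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # Each branch sees a different receptive field via its dilation rate.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        # A lightweight gate predicts per-branch weights from pooled features,
        # letting the block "choose" a receptive field per input.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(dilations), 1),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, H, W)
        weights = self.gate(x).unsqueeze(2)                        # (B, K, 1, 1, 1)
        return (feats * weights).sum(dim=1)

x = torch.randn(2, 64, 32, 32)
print(SelectiveKernelBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```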
Citations: 0
CRISP: A cross-modal integration framework based on the surprisingly popular algorithm for multimodal named entity recognition
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1016/j.neucom.2024.128792
Haitao Liu, Xianwei Xin, Jihua Song, Weiming Peng
The multimodal named entity recognition task on social media involves recognizing named entities using both textual and visual information, which is of great significance for information processing. Nevertheless, many existing models still face the following challenges. First, in the process of cross-modal interaction, the attention mechanism sometimes focuses on trivial parts of the images that are not relevant to entities, which not only neglects valuable information but also inevitably introduces visual noise. Second, the gate mechanism is widely used to filter out visual information and reduce the influence of noise on text understanding. However, the gate mechanism neglects fine-grained semantic relevance between modalities, which can compromise the filtering process. To address these issues, we propose a cross-modal integration framework based on the surprisingly popular algorithm, aiming to enhance the integration of effective visual guidance and reduce the interference of irrelevant visual noise. Specifically, we design a dual-branch interaction module that combines the attention mechanism with the surprisingly popular algorithm, allowing the model to focus on valuable but overlooked parts of the images. Furthermore, we compute the matching degree between modalities at multiple granularities, using the Choquet integral to establish a more reasonable basis for filtering out visual noise. We have conducted extensive experiments on public datasets, and the results demonstrate the advantages of our model.
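For intuition, here is a tiny NumPy sketch of the surprisingly popular (SP) decision rule that the framework builds on: among candidate answers, pick the one whose actual support most exceeds its predicted popularity. The toy vote numbers are made up, and CRISP applies the principle to cross-modal branch outputs rather than human voters.

```python
import numpy as np

# Actual vote shares per option vs. the average *predicted* popularity of each.
votes     = np.array([0.35, 0.40, 0.25])
predicted = np.array([0.30, 0.55, 0.15])

# The SP rule rewards answers that are more popular than expected.
surprise = votes - predicted           # [+0.05, -0.15, +0.10]
sp_choice = int(np.argmax(surprise))   # option 2 wins despite fewer raw votes
print(sp_choice, surprise)
```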
Citations: 0
A three-stage model for camouflaged object detection
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1016/j.neucom.2024.128784
Tianyou Chen, Hui Ruan, Shaojie Wang, Jin Xiao, Xiaoguang Hu
Camouflaged objects are typically assimilated into their backgrounds and exhibit fuzzy boundaries. The complex environmental conditions and the high intrinsic similarity between camouflaged targets and their surroundings pose significant challenges to accurately locating and segmenting these objects in their entirety. While existing methods have demonstrated remarkable performance in various real-world scenarios, they still face limitations when confronted with difficult cases such as small targets, thin structures, and indistinct boundaries. Drawing inspiration from human visual perception when observing images containing camouflaged objects, we propose a three-stage model that enables coarse-to-fine segmentation in a single iteration. Specifically, our model employs three decoders to sequentially process subsampled features, cropped features, and high-resolution original features. The proposed approach not only reduces computational overhead but also mitigates interference caused by background noise. Furthermore, considering the significance of multi-scale information, we have designed a multi-scale feature enhancement module that enlarges the receptive field while preserving detailed structural cues, and a boundary enhancement module that improves performance by leveraging boundary information. Subsequently, a mask-guided fusion module is proposed to generate fine-grained results by integrating coarse prediction maps with high-resolution feature maps. Our network shows superior performance without introducing unnecessary complexity. Upon acceptance of the paper, the source code will be made publicly available at https://github.com/clelouch/TSNet.
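As a sketch of the mask-guided fusion step described above, the snippet below upsamples a coarse prediction into a soft spatial mask that gates high-resolution features before a refinement convolution. The exact layer layout is an assumption for illustration, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskGuidedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, coarse_logits, hires_feats):
        # Upsample the coarse map to the high-resolution grid, turn it into a
        # soft spatial mask, and gate the features with it.
        mask = torch.sigmoid(
            F.interpolate(coarse_logits, size=hires_feats.shape[-2:],
                          mode="bilinear", align_corners=False)
        )
        gated = hires_feats * mask + hires_feats  # residual keeps full context
        return self.refine(gated)                 # fine-grained logits

coarse = torch.randn(1, 1, 44, 44)
feats = torch.randn(1, 32, 176, 176)
print(MaskGuidedFusion(32)(coarse, feats).shape)  # torch.Size([1, 1, 176, 176])
```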
Citations: 0
Robust source-free domain adaptation with anti-adversarial samples training
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1016/j.neucom.2024.128777
Zhirui Wang, Liu Yang, Yahong Han
Unsupervised source-free domain adaptation methods aim to transfer knowledge acquired from a labeled source domain to an unlabeled target domain, where the source data are not accessible during target-domain adaptation and the domain gap cannot be minimized by pairwise computation over samples from the source and target domains. Previous approaches assign pseudo-labels to target data using a pre-trained source model and progressively train the target model in a self-learning manner. However, incorrect pseudo-labels may adversely affect prediction in the target domain. Furthermore, these approaches overlook the generalization ability of the source model, which primarily affects the initial predictions of the target model. Therefore, we propose an effective framework based on adversarial training to train the target model for source-free domain adaptation. Adversarial training is an effective technique for enhancing the robustness of deep neural networks: by generating anti-adversarial examples and adversarial examples, the pseudo-labels of target data can be further corrected, yielding better performance in both accuracy and robustness. Moreover, owing to the inherent distribution difference between the source and target domains, mislabeled target samples inevitably exist, so a target-sample filtering scheme is proposed to refine pseudo-labels and further improve prediction on the target domain. Experiments conducted on benchmark tasks demonstrate that the proposed method outperforms existing approaches.
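A minimal single-step PyTorch sketch of the adversarial / anti-adversarial sample generation the abstract describes: ascend the loss gradient for an adversarial example, descend it for an anti-adversarial one. The FGSM-style one-step scheme and the step size are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def perturb(model, x, pseudo_y, eps=4 / 255, anti=False):
    # One gradient-sign step w.r.t. the cross-entropy against the pseudo-label.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), pseudo_y)
    grad = torch.autograd.grad(loss, x)[0]
    sign = -1.0 if anti else 1.0      # anti-adversarial samples descend the loss
    return (x + sign * eps * grad.sign()).detach()
```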
Citations: 0
Interpretable few-shot learning with online attribute selection
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1016/j.neucom.2024.128755
Mohammad Reza Zarei, Majid Komeili
Few-shot learning (FSL) presents a challenging learning problem in which only a few samples are available for each class. Decision interpretation is especially important in few-shot classification because the chance of error is greater than in traditional classification. However, the majority of previous FSL methods are black-box models. In this paper, we propose an inherently interpretable model for FSL based on human-friendly attributes. Human-friendly attributes have previously been used to train models with the potential for human interaction and interpretability, but such approaches are not directly extensible to the few-shot classification scenario. Moreover, we propose an online attribute selection mechanism that effectively filters out irrelevant attributes in each episode. The attribute selection mechanism improves accuracy and aids interpretability by reducing the number of attributes that participate in each episode. We further propose a mechanism that automatically detects episodes where the pool of available human-friendly attributes is insufficient and augments it with learned unknown attributes. We demonstrate that the proposed method achieves results on par with black-box few-shot learning models on four widely used datasets. We also empirically evaluate the level of alignment between model decisions and human understanding, and show that our model outperforms the comparison methods on this criterion.
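One plausible per-episode attribute-selection rule, sketched in PyTorch under the assumption of a Fisher-style separability score (variance of class means over within-class variance); the paper's actual criterion may differ.

```python
import torch

def select_attributes(support_attrs, labels, k=10):
    # support_attrs: (N, A) attribute predictions for the support set
    # labels: (N,) class ids; assumes >= 2 shots per class
    classes = labels.unique()
    means = torch.stack([support_attrs[labels == c].mean(0) for c in classes])
    between = means.var(dim=0, unbiased=False)                       # (A,)
    within = torch.stack(
        [support_attrs[labels == c].var(0, unbiased=False) for c in classes]
    ).mean(0) + 1e-6
    scores = between / within       # attributes that separate classes score high
    return scores.topk(min(k, scores.numel())).indices  # attributes kept this episode
```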
Citations: 0
Preserving text space integrity for robust compositional zero-shot learning via mixture of pretrained experts
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1016/j.neucom.2024.128773
Zehua Hao, Fang Liu, Licheng Jiao, Yaoyang Du, Shuo Li, Hao Wang, Pengfang Li, Xu Liu, Puhua Chen
In the current landscape of Compositional Zero-Shot Learning (CZSL) methods that leverage CLIP, the predominant approach is based on prompt-learning paradigms. These methods incur significant computational complexity when dealing with a large number of categories, and when confronted with new classification tasks they must learn the prompts again, which is both time-consuming and resource-intensive. To address these challenges, we present a new methodology, named the Mixture of Pretrained Experts (MoPE), for enhancing compositional zero-shot learning through logit-level fusion with a multi-expert fusion module. MoPE skillfully blends the benefits of extensive pre-trained models such as CLIP, BERT, GPT-3, and Word2Vec to tackle compositional zero-shot learning effectively. First, we extract the text label space of each language model individually, then map the visual feature vectors to the respective text spaces. This maintains the integrity and structure of each original text space. During this process, the pre-trained expert parameters are kept frozen; the mappings of visual features to the corresponding text spaces are learned and can be viewed as multiple learnable visual experts. In the model fusion phase, we propose a new fusion strategy featuring a gating mechanism that dynamically adjusts the contributions of the various models, enabling our approach to adapt more effectively to a range of tasks and datasets. The method’s robustness is demonstrated by the fact that the language models are not tailored to specific downstream task datasets or losses, which preserves each larger model’s topology and expands the potential for application. Preliminary experiments conducted on the UT-Zappos, AO-Clever, and C-GQA datasets indicate that MoPE performs competitively compared to existing techniques.
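The logit-level fusion can be sketched as follows in PyTorch: each frozen expert contributes class logits computed in its own text space via a learnable visual projection, and a gate mixes the per-expert logits dynamically. The dimensions and the linear gate are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LogitFusion(nn.Module):
    def __init__(self, img_dim, expert_dims):
        super().__init__()
        # One learnable visual projection per frozen text expert.
        self.projs = nn.ModuleList(nn.Linear(img_dim, d) for d in expert_dims)
        self.gate = nn.Linear(img_dim, len(expert_dims))

    def forward(self, img_feat, expert_label_embeds):
        # expert_label_embeds[i]: (num_classes, expert_dims[i]), kept frozen.
        logits = torch.stack(
            [proj(img_feat) @ emb.T
             for proj, emb in zip(self.projs, expert_label_embeds)], dim=1
        )                                             # (B, E, num_classes)
        w = self.gate(img_feat).softmax(dim=1)        # (B, E) dynamic expert weights
        return (logits * w.unsqueeze(-1)).sum(dim=1)  # fused (B, num_classes)
```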
Citations: 0
Adversarial diffusion for few-shot scene adaptive video anomaly detection
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1016/j.neucom.2024.128796
Yumna Zahid, Christine Zarges, Bernie Tiddeman, Jungong Han
Few-shot anomaly detection for video surveillance is challenging due to the diverse nature of target domains. Existing methodologies treat it as a one-class classification problem, training on a reduced sample of nominal scenes, and focus on either reconstructive or predictive frame methodologies to learn a manifold against which outliers can be detected during inference. We posit that the quality of image reconstruction or future-frame prediction is inherently important for identifying anomalous pixels in video frames. In this paper, we enhance image synthesis and mode coverage for video anomaly detection (VAD) by integrating a denoising diffusion model with a future-frame prediction model. Our novel VAD pipeline combines a Generative Adversarial Network with denoising diffusion to learn the underlying non-anomalous data distribution and generate high-fidelity future-frame samples in a single step. We further regularize the image reconstruction with perceptual quality metrics such as the Multi-scale Structural Similarity Index Measure and Peak Signal-to-Noise Ratio, ensuring high-quality output after only a few episodic training iterations. Extensive experiments demonstrate that our method outperforms state-of-the-art techniques across multiple benchmarks, validating that high-quality image synthesis in frame prediction leads to robust anomaly detection in videos.
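A sketch of the perceptual regularization mentioned above, assuming the torchmetrics implementations of MS-SSIM and PSNR; the loss weights and the L1 base term are illustrative assumptions, not the paper's exact objective.

```python
import torch
from torchmetrics.functional import (
    multiscale_structural_similarity_index_measure as ms_ssim,
    peak_signal_noise_ratio as psnr,
)

def perceptual_loss(pred, target, w_ssim=0.5, w_psnr=0.01):
    # pred, target: (B, C, H, W) frames scaled to [0, 1]
    l1 = (pred - target).abs().mean()
    ssim_term = 1.0 - ms_ssim(pred, target, data_range=1.0)  # higher SSIM -> lower loss
    psnr_term = -psnr(pred, target, data_range=1.0)          # higher PSNR -> lower loss
    return l1 + w_ssim * ssim_term + w_psnr * psnr_term
```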
Citations: 0
Physically-guided open vocabulary segmentation with weighted patched alignment loss
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1016/j.neucom.2024.128788
Weide Liu, Jieming Lou, Xingxing Wang, Wei Zhou, Jun Cheng, Xulei Yang
Open vocabulary segmentation is a challenging task that aims to segment thousands of unseen categories. Directly applying CLIP to open-vocabulary semantic segmentation is difficult due to the granularity gap between its image-level contrastive learning and the pixel-level recognition required for segmentation. To address these challenges, we propose a unified pipeline that leverages physical structure regularization to enhance the generalizability and robustness of open vocabulary segmentation. By incorporating physical structure information, which is independent of the training data, we aim to reduce bias and improve the model’s performance on unseen classes. We utilize low-level structures such as edges and keypoints as regularization terms, as they are easy to obtain and strongly correlated with segmentation boundary information; these structures serve as pseudo-ground truth to supervise the model. Furthermore, inspired by the effectiveness of comparative learning in human cognition, we introduce the weighted patched alignment loss. This loss function contrasts similar and dissimilar samples to acquire low-dimensional representations that capture the distinctions between different object classes. By incorporating physical knowledge and leveraging the weighted patched alignment loss, we aim to improve the model’s generalizability, robustness, and capability to recognize diverse object classes. Experiments on the COCO Stuff, Pascal VOC, Pascal Context-59, Pascal Context-459, ADE20K-150, and ADE20K-847 datasets demonstrate that our proposed method consistently improves on baselines and achieves a new state of the art in the open vocabulary segmentation task.
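One way to realize a weighted patch-alignment contrastive loss, sketched in PyTorch: same-class patch embeddings are pulled together and different-class ones pushed apart, with harder positives weighted more heavily. The hardness-based weighting is an assumption for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_patch_alignment_loss(patches, labels, tau=0.1):
    # patches: (N, D) patch embeddings, labels: (N,) class ids
    z = F.normalize(patches, dim=1)
    sim = z @ z.T / tau                            # (N, N) scaled similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    mask = ~torch.eye(len(z), dtype=torch.bool)    # drop self-pairs
    log_prob = sim - sim.masked_fill(~mask, -1e9).logsumexp(dim=1, keepdim=True)
    pos = same & mask
    # Weight positive pairs by how hard they are (lower similarity -> larger weight).
    w = (1.0 - z @ z.T).clamp(min=0)
    return -(w * log_prob * pos).sum() / pos.sum().clamp(min=1)
```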
Citations: 0
Active self-semi-supervised learning for few labeled samples
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1016/j.neucom.2024.128772
Ziting Wen, Oscar Pizarro, Stefan Williams
Training deep models with limited annotations poses a significant challenge when applied to diverse practical domains. Employing semi-supervised learning alongside a self-supervised model offers the potential to enhance label efficiency, but this approach faces a bottleneck in reducing the need for labels: we observed that the semi-supervised model disrupts valuable information from self-supervised learning when only limited labels are available. To address this issue, this paper proposes a simple yet effective framework, active self-semi-supervised learning (AS3L). AS3L bootstraps semi-supervised models with prior pseudo-labels (PPLs), which are obtained by label propagation over self-supervised features. We observe that the accuracy of PPLs is affected not only by the quality of the features but also by the selection of the labeled samples, so we develop active learning and label propagation strategies to obtain accurate PPLs. Consequently, our framework can significantly improve model performance under limited annotations while converging quickly. On image classification tasks across four datasets, our method outperforms the baseline by an average of 5.4% and reaches the baseline’s accuracy in about one third of the training time.
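A minimal NumPy sketch of PPL generation by label propagation over self-supervised features: build a kNN affinity graph, normalize it symmetrically, and solve the standard closed-form propagation. Assuming this common recipe matches the paper's variant.

```python
import numpy as np

def propagate_labels(feats, labeled_idx, labels, num_classes, k=10, alpha=0.99):
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = np.clip(feats @ feats.T, 0, None)        # nonnegative cosine similarities
    # Sparsify to a kNN affinity graph (index 0 of each row is the point itself).
    W = np.zeros_like(sim)
    nn_idx = np.argsort(-sim, axis=1)[:, 1:k + 1]
    rows = np.repeat(np.arange(len(feats)), k)
    W[rows, nn_idx.ravel()] = sim[rows, nn_idx.ravel()]
    W = np.maximum(W, W.T)
    d = W.sum(1) + 1e-8
    S = W / np.sqrt(d[:, None] * d[None, :])       # symmetric normalization
    Y = np.zeros((len(feats), num_classes))
    Y[labeled_idx, labels] = 1.0                   # seed with the few known labels
    F = np.linalg.solve(np.eye(len(feats)) - alpha * S, Y)
    return F.argmax(1)                             # prior pseudo-labels for all points
```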
Citations: 0
Out-of-vocabulary handling and topic quality control strategies in streaming topic models
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1016/j.neucom.2024.128757
Tung Nguyen, Tung Pham, Linh Ngo Van, Ha-Bang Ban, Khoat Than
Topic models have become ubiquitous tools for analyzing streaming data. However, existing streaming topic models suffer from several limitations when applied to real-world data streams, including the inability to accommodate evolving vocabularies or to control topic quality throughout the streaming process. In this paper, we propose a novel streaming topic modeling approach that dynamically adapts to the changing nature of data streams. Our method leverages Byte-Pair Encoding embeddings (BPEmb) to resolve the out-of-vocabulary problem that arises with new words in the stream. Additionally, we introduce a topic change variable that provides fine-grained control over topics’ parameter updates, and we present a preservation approach that retains high-coherence topics at each time step, helping preserve semantic quality. To further enhance adaptability, our method allows dynamic adjustment of the topic-space size as needed. To the best of our knowledge, we are the first to address vocabulary expansion while maintaining topic quality during the streaming process. Extensive experiments show the superior effectiveness of our method.
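The OOV handling can be sketched with the `bpemb` package: a word that first appears mid-stream is embedded as the mean of its byte-pair subword vectors, so the vocabulary can grow without retraining embeddings. Treating this package and the mean-pooling as the paper's exact setup is an assumption.

```python
import numpy as np
from bpemb import BPEmb  # downloads the pretrained English subword model on first use

bpemb_en = BPEmb(lang="en", vs=50000, dim=100)

def embed_word(word: str) -> np.ndarray:
    # BPE always decomposes a string into known subwords, so even words unseen
    # at training time receive a vector: the mean of their piece embeddings.
    return bpemb_en.embed(word).mean(axis=0)

print(embed_word("neurocomputing").shape)  # (100,)
```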
Citations: 0