
Neurocomputing: Latest Publications

S2CA: Shared Concept Prototypes and Concept-level Alignment for text–video retrieval
IF 5.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-08 DOI: 10.1016/j.neucom.2024.128851
Yuxiao Li, Yu Xin, Jiangbo Qian, Yihong Dong
Text–video retrieval, as a fundamental task of cross-modal learning, relies on effectively establishing the semantic association between text and video. At present, mainstream semantic alignment methods for text–video adopt instance-level alignment strategies, ignoring the fine-grained concept association and the “concept-level alignment” characteristics of text–video. In this regard, we propose Shared Concept Prototypes and Concept-level Alignment (S2CA) to achieve concept-level alignment. Specifically, we utilize the text–video Shared Concept Prototypes mechanism to bridge the correspondence between text and video. On this basis, we use cross-attention and Gumbel-softmax to obtain Discrete Concept Allocation Matrices and then assign text and video tokens to corresponding concept prototypes. In this way, texts and videos are decoupled into multiple Conceptual Aggregated Features, thereby achieving Concept-level Alignment. In addition, we use CLIP as the teacher model and adopt the Align-Transform-Reconstruct distillation framework to strengthen the multimodal semantic learning ability. The extensive experiments on MSR-VTT, DiDeMo, ActivityNet and MSVD prove the effectiveness of our method.
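As a rough illustration of the discrete allocation step described above (not the authors' implementation; the tensor shapes and the plain dot-product attention are assumptions), cross-attention scores between tokens and shared prototypes can be discretized with Gumbel-softmax and used to pool tokens into conceptual aggregated features:

```python
import torch
import torch.nn.functional as F

def allocate_to_concepts(tokens, prototypes, tau=1.0):
    # tokens:     (N, d) text or video token features
    # prototypes: (K, d) shared concept prototypes
    logits = tokens @ prototypes.t()                      # attention scores, (N, K)
    alloc = F.gumbel_softmax(logits, tau=tau, hard=True)  # discrete allocation matrix, (N, K)
    # average the tokens assigned to each prototype -> conceptual aggregated features, (K, d)
    counts = alloc.sum(dim=0, keepdim=True).t().clamp(min=1)
    return (alloc.t() @ tokens) / counts

torch.manual_seed(0)
feats = allocate_to_concepts(torch.randn(8, 16), torch.randn(4, 16))
print(feats.shape)  # torch.Size([4, 16])
```

Text-side and video-side features pooled this way can then be aligned prototype-by-prototype rather than at the instance level.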
Citations: 0
An analysis of pre-trained stable diffusion models through a semantic lens
IF 5.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-08 DOI: 10.1016/j.neucom.2024.128846
Simone Bonechi, Paolo Andreini, Barbara Toniella Corradini, Franco Scarselli
Recently, generative models for images have garnered remarkable attention, due to their effective generalization ability and their capability to generate highly detailed and realistic content. Indeed, the success of generative networks (e.g., BigGAN, StyleGAN, Diffusion Models) has driven researchers to develop increasingly powerful models. As a result, we have observed an unprecedented improvement in terms of both image resolution and realism, making generated images indistinguishable from real ones. In this work, we focus on a family of generative models known as Stable Diffusion Models (SDMs), which have recently emerged due to their ability to generate images in a multimodal setup (i.e., from a textual prompt) and have outperformed adversarial networks by learning to reverse a diffusion process. Given the complexity of these models that makes it hard to retrain them, researchers started to exploit pre-trained SDMs to perform downstream tasks (e.g., classification and segmentation), where semantics plays a fundamental role. In this context, understanding how well the model preserves semantic information may be crucial to improve its performance.
This paper presents an approach aimed at providing insights into the properties of a pre-trained SDM through the semantic lens. In particular, we analyze the features extracted by the U-Net within an SDM to explore whether and how the semantic information of an image is preserved in its internal representation. For this purpose, different distance measures are compared, and an ablation study is performed to select the layer (or combination of layers) of the U-Net that best preserves the semantic information. We also seek to understand whether semantics are preserved when the image undergoes simple transformations (e.g., rotation, flip, scale, padding, crop, and shift) and for different numbers of diffusion denoising steps. To evaluate these properties, we consider popular benchmarks for semantic segmentation tasks (e.g., COCO and Pascal-VOC). Our experiments suggest that the first encoder layer at 16×16 resolution effectively preserves semantic information. However, increasing inference steps (even for a minimal amount of noise) and applying various image transformations can affect the diffusion U-Net’s internal feature representation. Additionally, we propose some examples taken from a video benchmark (DAVIS dataset), where we investigate if an object instance within a video preserves its internal representation even after several frames. Our findings suggest that the internal object representation remains consistent across multiple frames in a video, as long as the configuration changes are not excessive.
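A minimal sketch of one plausible distance measure between two U-Net feature maps (assuming a per-position cosine distance averaged over the spatial grid; the paper's exact measures and evaluation protocol are not specified here):

```python
import torch
import torch.nn.functional as F

def semantic_distance(feat_a, feat_b):
    """Mean cosine distance between two (C, H, W) feature maps,
    computed channel-wise at each spatial position."""
    a = feat_a.flatten(1).t()  # (H*W, C)
    b = feat_b.flatten(1).t()
    return (1 - F.cosine_similarity(a, b, dim=1)).mean().item()

torch.manual_seed(0)
f = torch.randn(64, 16, 16)  # stand-in for a 16x16 U-Net encoder feature map
print(semantic_distance(f, f))                        # identical features -> ~0
print(semantic_distance(f, torch.randn(64, 16, 16)))  # unrelated features -> ~1
```

Comparing such distances between the features of an image and its transformed version is one way to probe whether semantics survive the transformation.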
Citations: 0
Fixed-time event-triggered pinning synchronization of complex network via aperiodically intermittent control
IF 5.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-08 DOI: 10.1016/j.neucom.2024.128818
Junru Zhang, Jian-An Wang, Jie Zhang, Mingjie Li, Zhicheng Zhao, Xinyu Wen
This paper studies the fixed-time event-triggered pinning synchronization problem for complex networks using aperiodically intermittent control. A novel fixed-time stability lemma with an aperiodic intermittent characteristic is first proposed. By designing an appropriate event-triggered aperiodically intermittent pinning controller (ETAIPC) based on the average control rate, several conditions are derived to ensure fixed-time synchronization. The upper bound of the settling time is independent of any initial values and depends only on the design parameters, network size and node dimension. A simple-to-execute selection algorithm is adopted to renew the pinning node set. Zeno behavior is also excluded through a rigorous theoretical analysis. Simulation examples are employed to demonstrate the efficacy of the obtained method.
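The ETAIPC design itself is not reproduced here, but a generic scalar event-triggered loop illustrates how a relative-error triggering rule yields a positive minimum inter-event time, which is the property used to exclude Zeno behavior (the system, gains and threshold below are illustrative assumptions, not the paper's):

```python
# Event-triggered feedback u = -k * x_s for dx/dt = a*x + u, where x_s is the
# state at the last triggering instant. An event fires when the sampling error
# |x - x_s| exceeds sigma*|x|; between events the error must grow by a fixed
# fraction of the state, so consecutive events are separated by a positive
# minimum time (no Zeno behavior).
a, k, sigma, dt = 1.0, 3.0, 0.3, 1e-4
x, x_s, t_last, inter_event = 1.0, 1.0, 0.0, []
for step in range(1, 50001):          # simulate 5 seconds with Euler steps
    x += dt * (a * x - k * x_s)
    if abs(x - x_s) >= sigma * abs(x):  # triggering condition
        x_s = x
        t = step * dt
        inter_event.append(t - t_last)
        t_last = t
print(len(inter_event), min(inter_event), abs(x))
```

The printout shows many events with a clearly positive minimum spacing, while the state converges toward zero.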
Citations: 0
Minimum control of cluster synchronization effort in diffusion coupled nonlinear networks
IF 5.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-08 DOI: 10.1016/j.neucom.2024.128841
Jinkui Zhang, Shidong Zhai, Wei Zhu
This paper studies the design of a minimal control effort for cluster synchronization (CS) in a diffusion-coupled nonlinear network under a directed graph. Under the conditions that the directed graph satisfies the cluster input equivalence condition and the system possesses a bounded Jacobian matrix, we obtain CS of the diffusion-coupled nonlinear network with a non-diagonal coupling matrix. Based on the matrix measure and the balancing theorem, we obtain local minimization controllers for the minimal control effort of CS. Finally, the theoretical results are validated through a numerical example involving a network of coupled FitzHugh–Nagumo neurons with a general topology of interactions.
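The FitzHugh–Nagumo example can be sketched in miniature: two diffusively coupled FHN neurons whose states synchronize under sufficiently strong coupling (the parameters below are standard textbook values and the two-node topology is an assumption, not the paper's setup):

```python
import numpy as np

def fhn(v, w, I=0.5, eps=0.08, a=0.7, b=0.8):
    """FitzHugh-Nagumo vector field with classic parameter values."""
    return v - v**3 / 3 - w + I, eps * (v + a - b * w)

dt, c = 0.01, 2.0                          # step size and diffusive coupling gain
v = np.array([1.0, -1.0])                  # different initial conditions
w = np.array([0.0, 0.5])
for _ in range(20000):                     # integrate 200 time units (Euler)
    dv, dw = fhn(v, w)
    dv = dv + c * (v[::-1] - v)            # diffusive (difference) coupling on v
    v, w = v + dt * dv, w + dt * dw
print(abs(v[0] - v[1]))                    # synchronization error, near zero
```

With the coupling gain set to zero instead, the two trajectories stay apart, which is the qualitative effect the minimal-effort controllers must just overcome.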
Citations: 0
Aircraft trajectory prediction in terminal airspace with intentions derived from local history
IF 5.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-08 DOI: 10.1016/j.neucom.2024.128843
Yifang Yin, Sheng Zhang, Yicheng Zhang, Yi Zhang, Shili Xiang
Aircraft trajectory prediction aims to estimate the future movements of aircraft in a scene, which is a crucial step for intelligent air traffic management such as capacity estimation and conflict detection. Current approaches primarily rely on inputting absolute locations, which improves the prediction accuracy but limits the model’s generalization ability to unseen environments. To bridge the gap, we propose to alternatively learn aircraft’s intentions from a repository of historical trajectories. Based on the observation that aircraft traveling through the same airspace may exhibit comparable behaviors, we utilize a location-adaptive threshold to identify nearby neighbors for a given query aircraft within the repository. The retrieved candidates are next filtered based on contextual information, such as landing time and landing direction, to eliminate less relevant components. The resulting set of nearby candidates are referred to as the local history, which emphasizes the modeling of aircraft’s local behavior. Moreover, an attention-based local history encoder is presented to aggregate information from all nearby candidates to generate a latent feature for capturing the aircraft’s intention. This latent feature is robust to normalized input trajectories, relative to the current location of the target aircraft, thus improving the model’s generalization capability to unseen areas. Our proposed intention modeling method is model-agnostic, which can be leveraged as an additional condition by any trajectory prediction model for improved robustness and accuracy. For evaluation, we integrate the intention modeling component into our previous diffusion-based aircraft trajectory prediction framework. We conduct experiments on two real-world aircraft trajectory datasets in both towered and non-towered terminal airspace. 
The experimental results show that our method captures various maneuvering patterns effectively, outperforming existing methods by a large margin in terms of both ADE and FDE.
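The ADE and FDE metrics cited above are the standard average and final displacement errors between a predicted and a ground-truth trajectory, and can be computed as:

```python
import numpy as np

def ade_fde(pred, gt):
    """Average Displacement Error (mean per-step Euclidean distance) and
    Final Displacement Error (distance at the last step) for two (T, 2)
    trajectories."""
    d = np.linalg.norm(pred - gt, axis=1)
    return d.mean(), d[-1]

pred = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
gt   = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
ade, fde = ade_fde(pred, gt)
print(ade, fde)  # -> 1.0 2.0
```

For 3-D aircraft trajectories the same formulas apply with (T, 3) arrays.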
Citations: 0
Adversarial contrastive representation training with external knowledge injection for zero-shot stance detection
IF 5.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-08 DOI: 10.1016/j.neucom.2024.128849
Yifan Ding, Ying Lei, Anqi Wang, Xiangrun Liu, Tuanfei Zhu, Yizhou Li
Zero-shot stance detection (ZSSD) aims to identify the author’s perspective on a specific target in text when that target was not encountered during model training, a setting motivated by rapidly evolving topics on social media. This paper introduces a ZSSD framework named KEL-CA. To enable the model to more effectively utilize transferable stance features for representing unseen targets, the framework incorporates a multi-layer contrastive learning and adversarial domain transfer module. Unlike traditional contrastive or adversarial learning, our framework captures both correlations and distinctions between invariant and specific features, as well as between different stance labels, and enhances the generalization ability and robustness of the features. Subsequently, to address the problem of insufficient information about the target context, we design a dual external knowledge injection module that uses a large language model (LLM) to extract external knowledge from a Wikipedia-based local knowledge base, with a Chain-of-Thought (CoT) process ensuring the timeliness and relevance of the knowledge for inferring the stances of unseen targets. Experimental results demonstrate that our approach outperforms existing models on two benchmark datasets, thereby validating its efficacy in ZSSD tasks.
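A generic single-layer InfoNCE loss illustrates the contrastive component (a common formulation, not the paper's exact multi-layer adversarial setup; batch and feature sizes are arbitrary):

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, tau=0.07):
    """InfoNCE contrastive loss over a batch: each anchor's positive is the
    same-index row of `positive`; all other rows serve as negatives."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / tau                 # (B, B) similarity matrix
    targets = torch.arange(a.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

torch.manual_seed(0)
x = torch.randn(8, 32)
print(info_nce(x, x).item())                   # matched pairs -> near-zero loss
print(info_nce(x, torch.randn(8, 32)).item())  # random pairs -> high loss
```

Pulling invariant features of same-stance samples together while pushing different-stance samples apart is the intuition the framework builds on.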
Citations: 0
Deep neural networks for knowledge-enhanced molecular modeling
IF 5.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-08 DOI: 10.1016/j.neucom.2024.128838
Siyu Long, Jianyu Wu, Yi Zhou, Fan Sha, Xinyu Dai
Designing neural networks for molecular modeling is a crucial task in the field of artificial intelligence. The goal is to utilize neural networks to understand and design molecules, which has significant implications for drug development and other real-world applications. Recently, with the advancement of deep learning, molecular modeling has made considerable progress. However, current methods are primarily data-driven, overlooking the role of domain knowledge, such as molecular shapes, in the modeling process. In this paper, we systematically investigate how incorporating molecular shape knowledge can enhance molecular modeling. Specifically, we design two deep neural networks, ShapePred and ShapeGen, to utilize molecular shapes in molecule prediction and generation. Experimental results demonstrate that integrating shape knowledge can significantly improve model performance. Notably, ShapePred exhibits strong performance across 11 molecule prediction datasets, while ShapeGen can more efficiently generate high-quality drug molecules based on given target proteins.
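One simple way to inject shape knowledge, shown purely as a hypothetical sketch (the abstract does not detail the ShapePred architecture; all module names and dimensions here are assumptions), is to concatenate a precomputed shape descriptor with the learned molecular embedding before the prediction head:

```python
import torch
import torch.nn as nn

class ShapeAwarePredictor(nn.Module):
    """Hypothetical sketch: a molecule property head that consumes both a
    molecular embedding and a fixed-size shape descriptor."""
    def __init__(self, mol_dim=64, shape_dim=16, hidden=128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(mol_dim + shape_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, mol_emb, shape_desc):
        # late fusion of data-driven and knowledge-derived features
        return self.head(torch.cat([mol_emb, shape_desc], dim=-1))

model = ShapeAwarePredictor()
out = model(torch.randn(4, 64), torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 1])
```

Any shape representation with a fixed dimensionality (e.g., a voxelized or descriptor-based encoding) could play the role of `shape_desc` in such a design.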
Citations: 0
Invisible and robust watermarking model based on hierarchical residual fusion multi-scale convolution
IF 5.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-08 DOI: 10.1016/j.neucom.2024.128834
Jun-Zhuo Zou, Ming-Xuan Chen, Li-Hua Gong
In current deep learning based watermarking technologies, it remains challenging to fully integrate the features of the watermark and the cover image. Most watermarking models with fixed-size kernel convolutions exhibit restricted feature extraction ability, leading to incomplete feature fusion. To address this issue, a hierarchical residual fusion multi-scale convolution (HRFMS) module is designed. The method extracts image features from various receptive fields and implements feature interaction through residual connections. To produce watermarked images with high visual quality and attack resistance, a watermarking model based on the HRFMS is devised to achieve multi-scale feature fusion. Moreover, to minimize the image distortion caused by the watermark information, an attention mask layer is designed to guide the distribution of the watermark information. The experimental results demonstrate that the invisibility and robustness of the HRFMSNet are excellent. The watermarked images generated by the HRFMSNet are nearly visually indistinguishable from the cover images. The average peak signal-to-noise ratio of the watermarked images is 37.13 dB, and most of the bit error rates of the decoded messages are below 0.02.
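The two reported quantities, PSNR and bit error rate, are standard metrics and can be computed as follows (the images below are synthetic stand-ins, not outputs of HRFMSNet):

```python
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio in dB between a cover image and its
    watermarked version; higher means less visible distortion."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def bit_error_rate(sent, decoded):
    """Fraction of watermark bits decoded incorrectly."""
    return np.mean(sent != decoded)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64))
stego = np.clip(cover + rng.integers(-2, 3, cover.shape), 0, 255)  # tiny embedding perturbation
print(psnr(cover, stego))  # small distortion -> high PSNR (well above 40 dB here)
print(bit_error_rate(np.array([1, 0, 1, 1]), np.array([1, 0, 0, 1])))  # -> 0.25
```

Against these definitions, the reported 37.13 dB average PSNR and sub-0.02 BER indicate low embedding distortion and reliable decoding.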
{"title":"Invisible and robust watermarking model based on hierarchical residual fusion multi-scale convolution","authors":"Jun-Zhuo Zou ,&nbsp;Ming-Xuan Chen ,&nbsp;Li-Hua Gong","doi":"10.1016/j.neucom.2024.128834","DOIUrl":"10.1016/j.neucom.2024.128834","url":null,"abstract":"<div><div>In current deep learning based watermarking technologies, it remains challenging to fully integrate the features of watermark and cover image. Most watermarking models with fixed-size kernel convolution exhibit restricted feature extraction ability, leading to incomplete feature fusion. To address this issue, a hierarchical residual fusion multi-scale convolution (HRFMS) module is designed. The method extracts image features from various receptive fields and implements feature interaction by residual connection. To produce watermarked image with high visual quality and attack resistance, a watermarking model based on the HRFMS is devised to achieve multi-scale feature fusion. Moreover, to minimize image distortion caused by watermark information, an attention mask layer is designed to guide the distribution of watermark information. The experimental results demonstrate that the invisibility and the robustness of the HRFMSNet are excellent. The watermarked images generated by the HRFMSNet are nearly visually indistinguishable from the cover images. 
The average peak signal-to-noise ratio of the watermarked images is 37.13 dB, and most of the bit error rates of the decoded messages are below 0.02.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"614 ","pages":"Article 128834"},"PeriodicalIF":5.5,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142660752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
Multi-attribute dynamic attenuation learning improved spiking actor network
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-08 DOI: 10.1016/j.neucom.2024.128819
Rong Xiao, Zhiyuan Hu, Jie Zhang, Chenwei Tang, Jiancheng Lv
Deep reinforcement learning (DRL) has shown promising results in solving robotic control and decision tasks, as it can learn high-dimensional state and action information well. Despite these successes, conventional neural-based DRL models are criticized for low energy efficiency, which makes them difficult to apply widely in low-power electronics. With more biologically plausible plasticity principles, spiking neural networks (SNNs) are now considered an energy-efficient and robust alternative. Most existing dynamics and learning paradigms for spiking neurons with the common Leaky Integrate-and-Fire (LIF) neuron model often result in relatively low efficiency and poor robustness. To address these limitations, we propose a multi-attribute dynamic attenuation learning improved spiking actor network (MADA-SAN) for reinforcement learning to achieve effective decision-making. The resistance, membrane voltage and membrane current of spiking neurons are updated from fixed values into dynamic attenuations. By enhancing the temporal relation dependencies in neurons, this model can learn the spatio-temporal relevance of complex continuous information well. Extensive experimental results show that MADA-SAN performs better than its counterpart deep actor network on six continuous control tasks from the OpenAI gym. Besides, we further validate that the proposed MADA-LIF achieves performance comparable with other state-of-the-art algorithms on MNIST and DVS-gesture recognition tasks.
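The abstract contrasts fixed LIF dynamics with dynamic attenuation. A minimal sketch of the standard LIF membrane update the work builds on, with the decay (attenuation) exposed as a parameter that a MADA-style model would learn per attribute rather than keep fixed; the threshold, reset, and input-current constants here are illustrative:

```python
import numpy as np

def lif_step(v, i_in, decay, v_th=1.0, v_reset=0.0):
    """One Leaky Integrate-and-Fire update. `decay` is the membrane attenuation
    factor; in a plain LIF neuron it is a fixed constant."""
    v = decay * v + i_in                     # leaky integration of the input current
    spike = (v >= v_th).astype(float)        # fire when the threshold is crossed
    v = np.where(spike > 0, v_reset, v)      # hard reset after firing
    return v, spike

v = np.zeros(1)
spikes = []
for t in range(5):
    v, s = lif_step(v, i_in=0.4, decay=0.9)  # constant drive, fixed attenuation
    spikes.append(int(s[0]))
print(spikes)  # [0, 0, 1, 0, 0]: charge, fire, reset, charge again
```

Replacing the constant `decay` (and, analogously, fixed resistance and current terms) with learned, time-varying attenuations is the change the abstract describes for strengthening temporal dependencies.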
{"title":"Multi-attribute dynamic attenuation learning improved spiking actor network","authors":"Rong Xiao,&nbsp;Zhiyuan Hu,&nbsp;Jie Zhang,&nbsp;Chenwei Tang,&nbsp;Jiancheng Lv","doi":"10.1016/j.neucom.2024.128819","DOIUrl":"10.1016/j.neucom.2024.128819","url":null,"abstract":"<div><div>Deep reinforcement learning (DRL) has shown promising results in solving robotic control and decision tasks, which can learn the high-dimensional state and action information well. Despite their successes, conventional neural-based DRL models are criticized for low energy efficiency, making them laborious to be widely applied in low-power electronics. With more biologically plausible plasticity principles, spiking neural networks (SNNs) are now considered an energy-efficient and robust alternative. The most existing dynamics and learning paradigms for spiking neurons with a common Leaky Integrate-and-Fire (LIF) neuron model often result in relatively low efficiency and poor robustness. To address these limitations, we propose a multi-attribute dynamic attenuation learning improved spiking actor network (MADA-SAN) for reinforcement learning to achieve effective decision-making. The resistance, membrane voltage and membrane current of spiking neurons are updated from a fixed value into dynamic attenuation. By enhancing the temporal relation dependencies in neurons, this model can learn the spatio-temporal relevance of complex continuous information well. Extensive experimental results show MADA-SAN performs better than its counterpart deep actor network on six continuous control tasks from OpenAI gym. 
Besides, we further validated the proposed MADA-LIF can achieve comparable performance with other state-of-the-art algorithms on MNIST and DVS-gesture recognition tasks.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"614 ","pages":"Article 128819"},"PeriodicalIF":5.5,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142660667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
Token Embeddings Augmentation benefits Parameter-Efficient Fine-Tuning under long-tailed distribution
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-08 DOI: 10.1016/j.neucom.2024.128853
Weiqiu Wang, Zining Chen, Zhicheng Zhao, Fei Su
Pre-trained vision-language models, particularly those utilizing CLIP, have advanced various visual tasks. Parameter-Efficient Fine-Tuning (PEFT) of such models is the mainstream approach for downstream tasks. Despite these advancements, long-tailed distributions still hamper image recognition performance in current PEFT schemes. Therefore, this paper proposes Token Embeddings Augmentation (TEA) to tackle long-tailed learning under the PEFT paradigm. Based on patch-token semantic mining, TEA uncovers category-specific semantic details within patch tokens to enhance token embeddings, a process named Patch-based Embeddings Augmentation (PEA). Then, a Probability Gate (PG) strategy is designed to effectively enrich the semantic information of tail categories using the enhanced embeddings. A Token Embeddings Consistency (TEC) loss is further introduced to prioritize category semantic information within tokens. Extensive experiments on multiple long-tailed distribution datasets show that our method improves the performance of various PEFT methods with different classification loss functions, especially for tail categories. Our optimal approach achieves state-of-the-art results on multiple datasets with negligible extra parameters or inference latency, thus enhancing the practicality of PEFT under long-tailed distributions.
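To illustrate the gating idea: the rarer a sample's class, the more likely its embedding is swapped for the enhanced one. The gating rule, class frequencies, and function name below are assumptions made for this sketch, not the paper's exact Probability Gate formulation:

```python
import numpy as np

def probability_gate(orig_emb, aug_emb, class_freq, seed=0):
    """Stochastically replace embeddings with augmented ones; low-frequency
    (tail) classes get a higher replacement probability."""
    rng = np.random.default_rng(seed)
    p_aug = 1.0 - class_freq / class_freq.max()    # head class -> 0, rarest -> near 1
    gate = rng.uniform(size=len(class_freq)) < p_aug
    return np.where(gate[:, None], aug_emb, orig_emb), gate

class_freq = np.array([1000.0, 50.0, 5.0])  # samples per class: head, mid, tail
orig = np.zeros((3, 4))                     # original token embeddings, one per sample
aug = np.ones((3, 4))                       # semantics-enriched embeddings
mixed, gate = probability_gate(orig, aug, class_freq)
print(gate[0])  # the head-class sample is never gated: its p_aug is exactly 0
```

The point of such a gate is that head categories, which already have abundant training signal, keep their original embeddings, while tail categories preferentially receive the enriched ones.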
{"title":"Token Embeddings Augmentation benefits Parameter-Efficient Fine-Tuning under long-tailed distribution","authors":"Weiqiu Wang ,&nbsp;Zining Chen ,&nbsp;Zhicheng Zhao ,&nbsp;Fei Su","doi":"10.1016/j.neucom.2024.128853","DOIUrl":"10.1016/j.neucom.2024.128853","url":null,"abstract":"<div><div>Pre-trained vision-language models, particularly those utilizing CLIP, have advanced various visual tasks. Parameter-Efficient Fine-Tuning (PEFT) on such models is a mainstream trend for downstream tasks. Despite advancements, long-tailed distribution still hampers image recognition performance in current PEFT schemes. Therefore, this paper proposes Token Embeddings Augmentation (TEA) to tackle long-tailed learning under PEFT paradigm. Based on patch token semantic mining, TEA uncovers category-specific semantic details within patch tokens to enhance token embeddings, named Patch-based Embeddings Augmentation (PEA). Then, a Probability Gate (PG) strategy is designed to effectively enrich semantic information of tail categories using enhanced embeddings. A Token Embeddings Consistency (TEC) loss is further introduced to prioritize category semantic information within tokens. Extensive experiments on multiple long-tailed distribution datasets show that our method improves the performance of various PEFT methods with different classification loss functions, especially for tail categories. 
Our optimal approach achieves the state-of-the-art results on multiple datasets with negligible parameters or inference latency, thus enhancing the practicality of PEFT in long-tailed distributions.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"615 ","pages":"Article 128853"},"PeriodicalIF":5.5,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0