
Latest publications in Neurocomputing

MoIRA: Modular instruction routing architecture for multi-task robotics
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-04 | DOI: 10.1016/j.neucom.2026.132962
Dmytro Kuzmenko, Nadiya Shvai
Mixture-of-Experts (MoE) approaches have gained traction in robotics for their ability to dynamically allocate resources and specialize sub-networks. However, such systems typically rely on monolithic architectures with rigid, learned internal routing, which prevents selective expert customization and necessitates expensive joint training. We propose MoIRA, an architecture-agnostic modular framework that coordinates decoupled experts via an external, zero-shot text router. MoIRA employs two routing strategies: embedding-based similarity and prompt-driven language model inference. Leveraging Gr00t-N1 and π0 Vision-Language-Action models with low-rank adapters, we evaluate MoIRA on GR1 Humanoid tasks and LIBERO benchmarks. Our approach consistently outperforms generalist models and competes with fully trained MoE pipelines. Furthermore, we demonstrate system robustness against instruction perturbations. By relying on textual descriptions for zero-shot orchestration, MoIRA proves the viability of modular deployment and offers a scalable, flexible foundation for multi-expert robotic systems.
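To make the routing idea concrete, here is a minimal sketch of embedding-similarity routing in the spirit of MoIRA's zero-shot text router. The expert names, task descriptions, and the bag-of-words "embedding" are illustrative stand-ins; the paper uses pretrained text encoders and, in one variant, prompt-driven language-model inference, neither of which is reproduced here.

```python
# Hypothetical expert registry; a real system would map each name to a fine-tuned
# Vision-Language-Action policy rather than a description string.
from collections import Counter
import math

EXPERTS = {
    "pick_place": "pick up objects and place them at target locations",
    "pour": "pour liquid or granular contents from one container into another",
    "articulated": "open and close drawers, doors and other articulated furniture",
}

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; stands in for a pretrained text encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(instruction: str) -> str:
    """Pick the expert whose description is most similar to the instruction."""
    scores = {name: cosine(embed(instruction), embed(desc))
              for name, desc in EXPERTS.items()}
    return max(scores, key=scores.get)

print(route("pick up the red cube and place it on the shelf"))
```

Because the router only reads text, a new expert can be registered by adding a description, without retraining the others.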
Citations: 0
Fine-grained image classification driven by Gaussian sampling and metric learning
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-04 | DOI: 10.1016/j.neucom.2026.132966
Ying Xu, Dexin Zhang, Dasen Cai
Triplet loss is widely used in classification tasks, especially in fine-grained image classification. However, there is a large proportion of simple triplet samples in the fine-grained image classification process, and the use of original triplet loss cannot fully utilize the data information to update network parameters. This study proposes a metric learning method called Triplet Loss with Gaussian Sampling Uncertainty (TL-GSU), which aims to capture fine-grained features with uncertainty in the data. Specifically, TL-GSU reformulates the triplet loss framework by modeling each anchor example using the data distribution of another example with the same class, represented by a multidimensional Gaussian distribution. The proposed loss function TL-GSU is defined as the expected value of the classical triplet loss, where anchor samples are taken from a multivariate Gaussian distribution derived from the training set. In addition, an improved feature reduction structure is proposed to reduce computational costs in the fine-grained visual classification pipeline. The proposed TL-GSU is comprehensively validated on three datasets: Stanford Cars, Stanford Dogs, and CUB-200–2011. The results demonstrate the effectiveness of the proposed approach.
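A minimal numpy sketch of the central idea, assuming a diagonal class-conditional Gaussian fitted to training embeddings: the anchor is drawn from that Gaussian and the triplet loss is evaluated in expectation via Monte Carlo. The embedding dimension, margin, and diagonal covariance are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_triplet_loss(mu_pos, var_pos, positive, negative,
                          margin=0.2, n_samples=256):
    """Monte Carlo estimate of E_{a ~ N(mu, var)}[ max(0, ||a-p|| - ||a-n|| + margin) ]."""
    anchors = rng.normal(mu_pos, np.sqrt(var_pos), size=(n_samples, mu_pos.size))
    d_ap = np.linalg.norm(anchors - positive, axis=1)
    d_an = np.linalg.norm(anchors - negative, axis=1)
    return np.maximum(0.0, d_ap - d_an + margin).mean()

# Toy 8-D embeddings; the class mean/variance would come from training features.
mu, var = rng.normal(size=8), np.full(8, 0.05)
pos = mu + 0.1 * rng.normal(size=8)        # same-class example
neg = rng.normal(size=8) + 2.0             # other-class example
print(gaussian_triplet_loss(mu, var, pos, neg))
```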
Citations: 0
Edge-centric community hiding based on permanence in attributed networks
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-04 | DOI: 10.1016/j.neucom.2026.132924
Zhichao Feng, Bohan Zhang, Junchang Jing, Dong Liu
Attributed networks contain both structural connections and rich node attributes, which are crucial for the formation and identification of community structures. Although integrating attribute data enhances the accuracy of community detection algorithms, it also raises the risk of privacy leakage. To address this issue, community hiding has emerged as a promising solution. However, most existing research has centered on topological networks, leaving attributed networks largely unexplored. In response to these issues, we propose Attribute Permanence (APERM)—a novel community hiding method specifically designed for attributed networks, which quantifies permanence loss to identify structurally influential edges for perturbation. The objective of our perturbation strategy is to disrupt the global community structure, which typically involves considering all existing and potential edges in the network, and this introduces considerable computational complexity. To tackle this problem, we introduce a strategy that identifies Closely Homogeneous Nodes (CHN) by integrating both structural similarity and attribute information, thereby significantly reducing the edge perturbation search space. The experimental results from eight community detection algorithms (four for attributed networks and four for non-attributed networks) across six real-world datasets demonstrate that our proposed APERM algorithm not only achieves effective community hiding but also retains robust performance.
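For readers unfamiliar with permanence, the sketch below computes a Chakraborty-style permanence score with networkx and ranks edges by how much their removal lowers it at the two endpoints, which is the kind of "structurally influential edge" signal the abstract refers to. The toy graph, the fixed two-community labelling, and the scoring rule are illustrative assumptions; APERM's attribute terms and CHN search-space reduction are not shown.

```python
import itertools
import networkx as nx

def permanence(G, v, comm):
    """comm: dict node -> community id; returns the permanence of node v."""
    deg = G.degree(v)
    if deg == 0:
        return 0.0
    nbrs = list(G.neighbors(v))
    internal = [u for u in nbrs if comm[u] == comm[v]]
    external = [u for u in nbrs if comm[u] != comm[v]]
    ext_counts = {}
    for u in external:
        ext_counts[comm[u]] = ext_counts.get(comm[u], 0) + 1
    e_max = max(ext_counts.values(), default=1)        # 1 avoids division by zero
    pairs = list(itertools.combinations(internal, 2))
    c_in = sum(G.has_edge(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0
    return len(internal) / (e_max * deg) - (1.0 - c_in)

def permanence_drop(G, edge, comm):
    """How much deleting `edge` lowers permanence at its two endpoints."""
    before = sum(permanence(G, v, comm) for v in edge)
    H = G.copy()
    H.remove_edge(*edge)
    return before - sum(permanence(H, v, comm) for v in edge)

G = nx.karate_club_graph()
comm = {v: 0 if G.nodes[v]["club"] == "Mr. Hi" else 1 for v in G}
best = max(G.edges(), key=lambda e: permanence_drop(G, e, comm))
print("most influential edge to perturb:", best)
```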
Citations: 0
Depth aware image compression with multi-reference dynamic entropy model
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-04 | DOI: 10.1016/j.neucom.2026.132971
Jingyi He, Yongjun Li, Yifei Liang, Mengyan Lu, Haorui Liu, Jixing Zhou, Yi Wei, Hongyan Liu
To overcome the limitations of static feature extraction and inefficient context modeling in existing learned image compression, this paper proposes an image compression algorithm that integrates a Depth-aware Adaptive Transformation (DAT) framework and a Multi-reference Dynamic Entropy Model (MDEM). A proposed Multi-scale Capacity-aware Feature Enhancer (MCFE) model is adaptively embedded into the network to enhance feature extraction capability. The DAT architecture integrates a variational autoencoder framework with MCFE to increase the density of latent representations. Furthermore, an improved soft-threshold sparse attention mechanism is combined with a multi-context model, incorporating adaptive weights to eliminate spatial redundancy in the latent representations across local, non-local, and global dimensions, while channel context is introduced to capture channel dependencies. Building upon this, the MDEM integrates the side information provided by DAT along with spatial and channel context information and employs a channel-wise autoregressive model to achieve accurate pixel estimation for precise entropy probability estimation, which improves compression performance. Evaluated on the Kodak, Tecnick, and CLIC (Challenge on Learned Image Compression) Professional Validation datasets, the proposed method achieves BD-rate (Bjøntegaard Delta rate) gains of 7.75%, 9.33%, and 5.73%, respectively, compared to the VTM-17.0 (Versatile Video Coding Test Model) benchmark. Therefore, the proposed algorithm overcomes the limitations of fixed-context and static feature extraction strategies, enabling precise probability estimation and superior compression performance through dynamic resource allocation and multi-dimensional contextual modeling.
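As a small illustration of why a sharper entropy model pays off, the sketch below computes the standard learned-compression rate estimate: each integer-quantized latent costs -log2 of the probability mass its predicted Gaussian assigns to the quantization bin. The synthetic latents and the two sets of (mean, scale) predictions are made up for illustration; this is not the MDEM itself.

```python
import numpy as np
from scipy.stats import norm

def estimated_bits(y_hat, mu, sigma):
    """Bits for integer-quantized latents y_hat under per-element N(mu, sigma) predictions."""
    p = norm.cdf(y_hat + 0.5, loc=mu, scale=sigma) - norm.cdf(y_hat - 0.5, loc=mu, scale=sigma)
    return float(-np.log2(np.clip(p, 1e-9, 1.0)).sum())

rng = np.random.default_rng(0)
mu = rng.normal(size=1000)                               # context-model predictions
y_hat = np.round(mu + rng.normal(scale=0.7, size=1000))  # quantized latents
print("sharp context model:", estimated_bits(y_hat, mu, 0.7), "bits")
print("weak context model :", estimated_bits(y_hat, np.zeros(1000), 2.0), "bits")
```

Richer conditioning (spatial, channel, and side information) moves the predictions toward the first case, which is the effect the multi-reference entropy model aims at.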
Citations: 0
HierLoRA: A hierarchical multi-concept learning approach with enhanced LoRA for personalized image diffusion models
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-04 | DOI: 10.1016/j.neucom.2026.132927
Yongjie Niu, Pengbo Zhou, Rui Zhou, Mingquan Zhou
Personalized image generation, a key application of diffusion models, holds significant importance for the advancement of computer vision, artistic creation, and content generation technologies. However, existing diffusion models fine-tuned with Low-Rank Adaptation (LoRA) face multiple challenges when learning novel concepts: language drift undermines the generation quality of new concepts in novel contexts; the entanglement of object features with other elements in reference images leads to misalignment between the learning target and its unique identifier; and traditional LoRA approaches are limited to learning only one concept at a time. To address these issues, this study proposes a novel hierarchical learning strategy and an enhanced LoRA module. Specifically, we incorporate the GeLU activation function into the LoRA architecture as a nonlinear transformation to effectively mitigate language drift. Furthermore, a gated hierarchical learning mechanism is designed to achieve inter-concept disentanglement, enabling a single LoRA module to learn multiple concepts concurrently. Experimental results across multiple random seeds demonstrate that our approach achieves a 4%–6% improvement in memory retention metrics and outperforms state-of-the-art methods in object fidelity and style similarity by approximately 12.5% and 10%, respectively. In addition to superior generation quality, our method demonstrates high computational efficiency, requiring significantly fewer trainable parameters (~45M) compared to existing baselines. While preserving critical features of target objects and maintaining the model’s original capabilities, our method enables the generation of images across diverse scenes in new styles. In scenarios requiring the simultaneous learning of multiple concepts, this study not only presents a novel solution to the multi-concept learning problem in personalized diffusion model training but also lays a technical foundation for high-quality customized AI image generation and diverse visual content creation. The source code is publicly available at https://github.com/ydniuyongjie/HierLoRA/tree/main.
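A minimal PyTorch sketch of one ingredient the abstract names explicitly: a LoRA adapter with a GeLU nonlinearity inserted between the down- and up-projections. The rank, scaling, and zero-initialized up-projection are common LoRA defaults assumed here; the gated hierarchical multi-concept mechanism is not reproduced.

```python
import torch
import torch.nn as nn

class GeluLoRALinear(nn.Module):
    """A frozen linear layer plus a low-rank update with a GeLU between the projections."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)            # pretrained weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)         # adapter starts as a zero update
        self.act = nn.GELU()
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.act(self.down(x)))

layer = GeluLoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)
```

Only the down- and up-projections receive gradients, so the adapter stays cheap to train while the frozen base layer preserves the pretrained behaviour.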
Citations: 0
Seeing the whole in the parts with self-supervised representation learning
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-04 | DOI: 10.1016/j.neucom.2026.132973
Arthur Aubret, Cèline Teulière, Jochen Triesch
Humans learn to recognize categories of objects, even when exposed to minimal language supervision. Behavioral studies and the successes of self-supervised learning (SSL) models suggest that this learning may hinge on modeling spatial regularities of visual features. However, SSL models rely on geometric image augmentations such as masking portions of an image or aggressively cropping it, which are not known to be performed by the brain. Here, we propose CO-SSL, an alternative to geometric image augmentations to model spatial co-occurrences. CO-SSL aligns local representations (before pooling) with a global image representation. Combined with a neural network endowed with small receptive fields, we show that it outperforms previous methods by up to 43.4% on ImageNet-1k when not using cropping augmentations. In addition, CO-SSL can be combined with cropping image augmentations to accelerate category learning and increase the robustness to internal corruptions and small adversarial attacks. Overall, our work paves the way towards a new approach for modeling biological learning and developing self-supervised representations in artificial systems.
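A minimal PyTorch sketch of the stated core idea, aligning each local (pre-pooling) feature with the global image embedding; the average-pooled global target, the stop-gradient, and the cosine objective are assumptions for illustration rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def local_global_alignment(feat_map: torch.Tensor) -> torch.Tensor:
    """feat_map: (B, C, H, W) backbone features taken before global pooling."""
    global_emb = feat_map.mean(dim=(2, 3))                      # (B, C) pooled image embedding
    local_emb = feat_map.flatten(2).transpose(1, 2)             # (B, H*W, C) per-location features
    g = F.normalize(global_emb, dim=-1).unsqueeze(1).detach()   # stop-gradient target
    loc = F.normalize(local_emb, dim=-1)
    return (1.0 - (loc * g).sum(dim=-1)).mean()                 # mean cosine distance

feat = torch.randn(2, 128, 7, 7, requires_grad=True)
loss = local_global_alignment(feat)
loss.backward()
print(float(loss))
```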
Citations: 0
A Bayesian approach to tensor networks
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-04 | DOI: 10.1016/j.neucom.2026.132961
Erdong Guo, David Draper
Bayesian statistical learning is a powerful paradigm for inference and prediction, which integrates internal information (sampling distribution of training data) and external information (prior knowledge or background information) within a logically consistent probabilistic framework. In addition, the posterior distribution and the posterior predictive (marginal) distribution derived from Bayes’ rule summarize the entire information required for inference and prediction, respectively. In this work, we investigate the Bayesian framework of the Tensor Network (BTN) from two perspectives. First, for the inference step, we propose an effective initialization scheme for the BTN parameters, which significantly improves the robustness and efficiency of the training procedure and leads to improved test performance. Second, in the prediction stage, we consider the Gaussian prior of the weights in BTN and predict the labels of the new observations using the posterior predictive (marginal) distribution. We derive the approximation of the posterior predictive distribution using the Laplace approximation, where the outer-product approximation of the Hessian matrix of the posterior distribution is applied. In the numerical experiments, we evaluate the performance of our initialization strategy and demonstrate its advantages by comparing it with other popular initialization methods including He initialization, Xavier initialization and Haliassos initialization methods on California House Price (CHP), Breast Cancer (BC), Phishing Website (PW), MNIST, Fashion-MNIST (FMNIST), SVHN and CIFAR-10 datasets. We further examine the characteristics of BTN by showing its parameters and decision boundaries trained on a two-dimensional synthetic dataset. The performance of BTN is thoroughly analyzed from two perspectives: generalization and calibration. Through experiments on a variety of the aforementioned datasets, we demonstrate the superior performance of BTN both in generalization and calibration compared to regular TN-based learning models. This demonstrates the potential of the Bayesian formalism in the development of more powerful TN-based learning models.
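To make the Laplace step concrete, here is a hedged numpy sketch of the general recipe the abstract describes (MAP fit, outer-product approximation of the Hessian, Monte Carlo posterior predictive), applied to a toy logistic regression rather than a tensor network; the data, prior precision, and optimizer are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary classification data (stand-in for the TN's training set).
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (sigmoid(X @ w_true) > rng.uniform(size=200)).astype(float)

# MAP estimate by gradient descent with Gaussian prior precision lam.
lam, w_map = 1.0, np.zeros(3)
for _ in range(500):
    grad = X.T @ (sigmoid(X @ w_map) - y) + lam * w_map
    w_map -= 0.01 * grad

# Outer-product (empirical Fisher) approximation of the posterior Hessian.
per_example = X * (sigmoid(X @ w_map) - y)[:, None]    # per-example gradients
H = per_example.T @ per_example + lam * np.eye(3)
cov = np.linalg.inv(H)

# Posterior predictive at a new input via Monte Carlo over the Laplace posterior.
x_star = np.array([0.3, -1.0, 2.0])
w_samples = rng.multivariate_normal(w_map, cov, size=2000)
print("MAP prediction     :", float(sigmoid(x_star @ w_map)))
print("Laplace predictive :", float(sigmoid(w_samples @ x_star).mean()))
```

Averaging the prediction over the Laplace posterior, instead of plugging in the MAP point alone, is what gives the calibrated predictive distribution the abstract evaluates.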
Citations: 0
Image-text driven style randomization for domain generalized semantic segmentation
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-04 | DOI: 10.1016/j.neucom.2026.132953
Junho Lee, Jisu Yoon, Jisong Kim, Jun Won Choi
Semantic segmentation models trained on source domains often fail to generalize to unseen domains due to domain shifts caused by varying environmental conditions. While existing approaches rely solely on text prompts for domain randomization, their generated styles often deviate from real-world distributions. To address this limitation, we propose a novel two-stage framework for Domain Generalization in Semantic Segmentation (DGSS). First, we introduce Image-Prompt-driven Instance Normalization (I-PIN), which leverages both style images and text prompts to optimize style parameters, achieving more accurate style representations compared to text-only approaches. Second, we present Dual-Path Style-Invariant Feature Learning (DSFL) which employs inter-style and intra-style consistency losses, ensuring consistent predictions across different styles while promoting feature alignment within semantic classes. Extensive experiments demonstrate that our approach consistently outperforms existing state-of-the-art methods across multiple challenging domains, effectively addressing the domain shift problem in semantic segmentation.
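A minimal PyTorch sketch of the instance-normalization style injection that I-PIN builds on: content features are whitened per channel and re-colored with statistics from a style source. Deriving and optimizing those style parameters jointly from a style image and a text prompt, which is the paper's contribution, is not shown; the AdaIN-style transfer below is a common baseline assumed for illustration.

```python
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """content, style: (B, C, H, W) feature maps; returns content re-styled with style statistics."""
    c_mu = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mu = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mu) / c_std + s_mu

stylized = adain(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(stylized.shape)
```

Randomizing the per-channel mean and standard deviation in this way changes appearance while leaving the spatial layout, and hence the segmentation labels, untouched.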
Citations: 0
STAR-SNN: A spatio-temporal adaptive recurrent spiking neural network with separated propagation surrogate gradient for hardware efficient real-time learning
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-04 | DOI: 10.1016/j.neucom.2026.132968
Hojae Choi, Jaewook Kim, Jongkil Park, Seongsik Park, Hyun Jae Jang, Seung Hwan Lee, Byeong-Kwon Ju, YeonJoo Jeong
Backpropagation Through Time (BPTT) trains Recurrent Spiking Neural Networks (R-SNNs) effectively but incurs high computational and memory costs, limiting real-time applications. To mitigate resource demands, we adopt truncated BPTT (K=1), reducing memory cost by three orders of magnitude. However, this truncation weakens sequence learning by limiting gradient propagation. To compensate, we introduce the Spatio-temporal Adaptive Recurrent Spiking Neural Network (STAR-SNN), which incorporates adaptive parameters to enhance high-dimensional representations and effectively retain sequence information despite truncation. Additionally, R-SNNs suffer from unstable training due to the entanglement of spike generation and suppression in weight updates. To resolve this, we develop Separated Propagation Surrogate Gradient (SPSG), which decouples these processes by selectively propagating error signals, stabilizing learning and improving convergence. Our approach achieves a 393-fold reduction in MSE loss for chaotic system forecasting and delivers high performance in event-driven DVS-Gesture recognition, establishing a scalable, hardware-efficient framework for real-time neuromorphic computing.
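A minimal PyTorch sketch of the two mechanisms the abstract combines: a spiking neuron trained with a surrogate gradient, and K=1 truncated BPTT realized by detaching the membrane potential between time steps. The threshold, decay, surrogate shape, and soft reset are illustrative defaults, not STAR-SNN's adaptive parameters or its separated-propagation (SPSG) rule.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                            # hard threshold on the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2     # smooth surrogate on the backward pass

def lif_step(x, v, decay=0.9, threshold=1.0):
    v = decay * v + x
    spike = SurrogateSpike.apply(v - threshold)
    v = v - spike * threshold                             # soft reset
    return spike, v.detach()                              # K=1 truncation: cut the temporal graph here

w = torch.nn.Parameter(0.1 * torch.randn(16, 16))
v = torch.zeros(4, 16)
loss = torch.zeros(())
for t in range(10):
    spike, v = lif_step(torch.randn(4, 16) @ w, v)
    loss = loss + spike.mean()
loss.backward()
print(w.grad.norm())
```

Because the computation graph is cut at every step, memory no longer grows with sequence length, which is where the truncation's memory savings come from.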
Citations: 0
Data-driven robust state estimation based on EK-SVSF
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-04 | DOI: 10.1016/j.neucom.2026.132869
Meng Liu, Xiao He
This paper introduces a novel extension to the Extended Kalman-based Smooth Variable Structure Filter (EK-SVSF), a hybrid state estimation framework that integrates the Extended Kalman Filter (EKF) with the Smooth Variable Structure Filter (SVSF). Tailored for nonlinear systems subject to model uncertainties and external disturbances, EK-SVSF enhances estimation accuracy by leveraging the complementary strengths of its constituent filters. Nonetheless, the efficacy of EK-SVSF hinges critically on the selection of an appropriate width for the smoothing boundary layer (SBL); suboptimal values—either excessively large or small—can substantially impair filtering performance. Compounding this issue, inherent model uncertainties render the determination of an optimal SBL a formidable and enduring challenge. To mitigate this, we propose a data-driven methodology that autonomously extracts salient features from the smoothing boundary function, thereby resolving the parameter tuning dilemma under model uncertainty. Furthermore, to refine the associated multi-loss weighted aggregation, we incorporate an adaptive weighting scheme based on the coefficient of variation, enabling dynamic optimization. Empirical evaluations demonstrate that the proposed approach yields robust and resilient state estimation outcomes, even in the presence of significant model discrepancies.
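For intuition about the parameter being tuned, the snippet below shows only the saturation term of an SVSF-style correction: inside a smoothing boundary layer of width psi the action is proportional to the innovation, outside it switches toward ±1. The innovation values and widths are arbitrary; the full EK-SVSF gain and the proposed data-driven tuning are not reproduced.

```python
import numpy as np

def sat(innovation, psi):
    """Element-wise saturation of the innovation by the boundary-layer width psi."""
    return np.clip(innovation / psi, -1.0, 1.0)

e = np.linspace(-2.0, 2.0, 9)
for psi in (0.5, 1.0, 2.0):       # too narrow / moderate / wide boundary layer
    print(f"psi={psi}:", np.round(sat(e, psi), 2))
```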
Citations: 0