
Neural Networks: Latest Publications

Dynamic bidirectional data recomposition for efficient road garbage segmentation in semi-supervised learning
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.neunet.2026.108655
Suheng Peng , Jiacai Liao , Libo Cao
Deep neural networks excel in road garbage segmentation but require costly pixel-level annotations. Balancing accuracy and annotation cost is a key bottleneck in urban garbage management. Semi-supervised learning (SSL) reduces the dependence on annotations by utilizing large amounts of unlabeled data. However, existing methods face a key challenge: under extreme annotation imbalance, the scarce labeled data often lacks diversity. This leads to repeated reuse during training, preventing full information exploitation and causing model performance stagnation. To address this, we introduce the Dynamic Bidirectional Data Recomposition (DBDR) mechanism, which dynamically adjusts the bidirectional information interaction between labeled and unlabeled data to solve the problem of representation stagnation. Early training: labeled data is integrated into the unlabeled data stream according to confidence levels, guiding the model to prioritize capturing and stabilizing basic semantic prototypes. Mid-training: a dynamic memory queue is constructed to quantify the evolution of model confidence states over time. We use dynamic thresholds and dual validation to trigger a reverse flow of knowledge from unlabeled to labeled supervision. This breaks local optima in the encoder and reshapes the semantic decision boundaries. DBDR can be integrated into any current mainstream SSL framework. On a real-world road garbage dataset, DBDR delivers a significant performance boost over all five state-of-the-art baseline models. Ablation experiments validate its key improvements in the segmentation of confusing targets (e.g., plastic, paper). This research provides an economically feasible solution for future smart city waste management technologies.
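The "dynamic memory queue" and "dual validation" ideas in this abstract can be illustrated with a minimal sketch. This is a hypothetical toy (class and method names are our own, not the paper's): a bounded queue tracks recent batch confidences, the acceptance threshold adapts to their running mean, and a pseudo-label is admitted only when two views both clear the threshold.

```python
from collections import deque

import numpy as np


class DynamicConfidenceQueue:
    """Toy sketch of a dynamic memory queue for SSL thresholding.

    The threshold adapts to recent confidence statistics rather than
    being fixed; names and details are illustrative assumptions.
    """

    def __init__(self, maxlen=100, init_threshold=0.8):
        self.queue = deque(maxlen=maxlen)  # bounded history of batch confidences
        self.init_threshold = init_threshold

    def update(self, confidences):
        # Record the mean confidence of the current batch.
        self.queue.append(float(np.mean(confidences)))

    def threshold(self):
        # Dynamic threshold: running mean over the queue, falling back
        # to the initial value while the queue is still empty.
        if not self.queue:
            return self.init_threshold
        return float(np.mean(self.queue))

    def accepts(self, conf_view_a, conf_view_b):
        # "Dual validation": both prediction views must clear the threshold.
        t = self.threshold()
        return conf_view_a >= t and conf_view_b >= t
```

In a real SSL loop, `accepts` would gate whether an unlabeled sample's pseudo-label flows back into the labeled supervision stream.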
Citations: 0
HP-GAN: Harnessing pretrained networks for GAN improvement with FakeTwins and discriminator consistency
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.neunet.2026.108666
Geonhui Son , Jeong Ryong Lee , Dosik Hwang
Generative Adversarial Networks (GANs) have made significant progress in enhancing the quality of image synthesis. Recent methods frequently leverage pretrained networks to calculate perceptual losses or utilize pretrained feature spaces. In this paper, we extend the capabilities of pretrained networks by incorporating innovative self-supervised learning techniques and enforcing consistency between discriminators during GAN training. Our proposed method, named HP-GAN, effectively exploits neural network priors through two primary strategies: FakeTwins and discriminator consistency. FakeTwins leverages pretrained networks as encoders to compute a self-supervised loss and applies it through the generated images to train the generator, thereby enabling the generation of more diverse and higher-quality images. Additionally, we introduce a consistency mechanism between discriminators that evaluate feature maps extracted from Convolutional Neural Network (CNN) and Vision Transformer (ViT) feature networks. Discriminator consistency promotes coherent learning among discriminators and enhances training robustness by aligning their assessments of image quality. Our extensive evaluation across seventeen datasets, covering large-scale, small, and limited-data scenarios and a variety of image domains, demonstrates that HP-GAN consistently outperforms current state-of-the-art methods in terms of Fréchet Inception Distance (FID), achieving significant improvements in image diversity and quality. Code is available at: https://github.com/higun2/HP-GAN.
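The two loss terms named above can be sketched at the vector level. This is a hedged illustration under our own assumptions (the paper's exact formulations may differ): a FakeTwins-style term pushes frozen-encoder embeddings of two views of the same generated image to agree, and a consistency term penalizes disagreement between the CNN- and ViT-based discriminators' scores.

```python
import numpy as np


def cosine(a, b, eps=1e-12):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))


def faketwins_loss(emb_view1, emb_view2):
    """Hypothetical FakeTwins-style self-supervised term: embeddings of
    two views of one generated image (from a frozen pretrained encoder)
    should agree, so we minimize 1 - cosine similarity."""
    return 1.0 - cosine(emb_view1, emb_view2)


def discriminator_consistency(scores_cnn, scores_vit):
    # Consistency term: the CNN- and ViT-feature discriminators should
    # assign similar realism scores to the same batch of images.
    return float(np.mean((scores_cnn - scores_vit) ** 2))
```

Both terms would be added to the usual adversarial objective with weighting coefficients chosen by validation.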
Citations: 0
Enhancing adversarial transferability via curvature-aware penalization
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.neunet.2026.108665
Cheng Peng , Zeze Tao , Junyu Liu , Jinjia Peng
Transfer-based attacks generate adversarial examples on a surrogate model and exploit the intriguing property of transferability to deceive other unknown models, making them practical for real-world scenarios. Recent research has sought to optimize the loss surface by minimizing its maximum loss, which in practice cannot be computed exactly and is instead approximated through gradient ascent. However, the loss landscape becomes increasingly non-linear during later attack stages, making gradient ascent less effective. To address this challenge, we propose a novel attack called Curvature-Aware Penalization (CAP), which incorporates the gradient norm and a curvature-aware term as regularization terms to maintain the flatness of the loss surface. Since directly computing the Hessian matrix is computationally expensive, we utilize the finite difference method to reduce computational complexity. Specifically, we randomly sample an example from the neighborhood and interpolate gradients at three neighboring points along the example's gradient direction to approximate the Hessian. Additionally, to reduce the variance caused by random sampling, the combined gradients are averaged over multiple stochastic samples. Comprehensive experimental results demonstrate that CAP can not only craft adversarial examples with enhanced transferability across various network architectures but also exhibit stronger resistance to state-of-the-art adversarial defense methods. Code is available at https://github.com/PC614/CAP.
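The finite-difference trick described above can be sketched on a toy quadratic loss where the gradient is known in closed form. This is a minimal NumPy illustration of the general technique (a directional Hessian-vector product from two extra gradient evaluations), not the paper's code; function names and the penalty form are assumptions.

```python
import numpy as np


def fd_hessian_vector(grad_fn, x, v, h=1e-4):
    """Finite-difference Hessian-vector product: instead of forming the
    Hessian H explicitly, approximate
        H v ~= (g(x + h v) - g(x - h v)) / (2 h)
    from gradients at two neighboring points along direction v."""
    return (grad_fn(x + h * v) - grad_fn(x - h * v)) / (2.0 * h)


def curvature_penalty(grad_fn, x, h=1e-4):
    """Gradient-norm and directional-curvature terms, sketched in the
    spirit of the CAP regularizers: curvature is measured along the
    normalized gradient direction via v^T H v."""
    g = grad_fn(x)
    g_norm = float(np.linalg.norm(g))
    v = g / (g_norm + 1e-12)          # unit direction of steepest ascent
    hv = fd_hessian_vector(grad_fn, x, v, h)
    curvature = float(v @ hv)          # Rayleigh-quotient-style curvature
    return g_norm, curvature
```

On f(x) = 0.5 xᵀAx the gradient is Ax, so the directional curvature recovered at x = (1, 0) with A = diag(2, 6) should be the corresponding eigenvalue 2, which makes the approximation easy to sanity-check.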
Citations: 0
An interactive axial feature selection network for medical image classification
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.neunet.2026.108661
Shuai Pang, Chunhua Hu, Juan Zhao, Haifang Yu
To address the differences and correlations between features, and to fully exploit salient semantics in medical image classification tasks, this paper proposes an Interactive Axial Feature Selection Network (IAFSNet), which improves feature representation and effectively filters noise during classification, thereby enhancing classification performance. The paper introduces a newly designed Feature Interaction Module (FIM), which learns spatial differences between features and strengthens the interdependence and complementarity between local spatial features and global contextual semantics. Additionally, the paper implements a novel Axial Feature Selection Module (AFSM), which filters salient feature semantics from three perspectives: horizontal, vertical, and spatial. By adjusting thresholds, salient features are emphasized while irrelevant noise is eliminated, allowing these key features to cross-aggregate layer by layer and interact with one another, ultimately improving classification accuracy. Experimental results on four benchmark datasets demonstrate that the proposed IAFSNet exhibits excellent classification performance and robustness, significantly outperforming many existing classification methods.
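Threshold-based axial selection can be illustrated with a toy feature map. This is a hypothetical sketch (our own function names, not the paper's AFSM): each row and column is scored by its mean activation, and only positions whose row and column scores both pass a relative threshold survive; the rest are zeroed as noise.

```python
import numpy as np


def axial_feature_select(fmap, tau=0.5):
    """Toy axial selection over a 2-D feature map (H, W).

    Hypothetical illustration: summarize along the vertical and
    horizontal axes, threshold each axis relative to its maximum, and
    keep only positions salient on both axes.
    """
    row_score = fmap.mean(axis=1)                   # one score per row
    col_score = fmap.mean(axis=0)                   # one score per column
    row_keep = row_score >= tau * row_score.max()   # salient rows
    col_keep = col_score >= tau * col_score.max()   # salient columns
    mask = np.outer(row_keep, col_keep).astype(fmap.dtype)
    return fmap * mask                              # suppress the rest
```

Raising `tau` makes the selection stricter, mirroring the threshold adjustment described in the abstract.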
Citations: 0
Learning discriminative prototypes: Adaptive relation-aware refinement and patch-level contextual feature reweighting for few-shot classification
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.neunet.2026.108649
Mengjuan Jiang, Fanzhang Li
Few-shot learning (FSL) aims to achieve efficient classification with limited labeled samples, providing an important research paradigm for addressing model generalization in data-scarce scenarios. In the metric-based FSL framework, class prototypes serve as the core transferable representation of classes, and their discriminative power directly impacts the model's classification performance. However, existing methods face two major bottlenecks: first, traditional feature selection mechanisms use static modeling approaches that are susceptible to background noise and struggle to capture dynamic relationships between classes; second, due to limitations in the quantity and quality of labeled samples, prototype representations based on global features lack fine-grained expression of local discriminative features, limiting the prototype's representational power. To overcome these limitations, we propose a novel framework: Learning Discriminative Prototypes (LDP). LDP includes two modules: (1) Adaptive relation-aware refinement, which dynamically models the relationships between class prototypes, highlighting the key features of each class and effectively enhancing the robustness of feature representations; (2) Patch-level contextual feature reweighting, which reweights samples through patch-level feature interactions, thereby obtaining a more discriminative prototype. Experimental results demonstrate that LDP shows strong competitiveness on five benchmarks covering both standard and cross-domain datasets, validating its effectiveness in FSL tasks. For example, in the 1-shot setting on miniImageNet and tieredImageNet, LDP achieves over 12% accuracy improvement compared with the baseline methods; on the cross-domain dataset CUB200, the improvement reaches 6.45% in the 1-shot case. Our code is available on GitHub at https://github.com/fewshot-learner/LDP.
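The patch-level reweighting idea can be sketched in a few lines. This is a hedged illustration under our own assumptions (not the LDP implementation): support patches are weighted by their similarity to a query/context vector before being averaged, so discriminative patches dominate the resulting prototype.

```python
import numpy as np


def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()


def reweighted_prototype(patch_feats, context_feat):
    """Toy patch-level contextual reweighting (hypothetical names).

    patch_feats:  (num_patches, dim) support-image patch embeddings
    context_feat: (dim,) query/context vector
    Returns a (dim,) prototype biased toward context-relevant patches,
    instead of a plain unweighted patch average.
    """
    sims = patch_feats @ context_feat   # patch-context similarity scores
    weights = softmax(sims)             # normalized patch weights
    return weights @ patch_feats        # similarity-weighted prototype
```

With a strongly aligned context vector, the prototype collapses toward the matching patch rather than the mean of all patches.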
Citations: 0
Autorep: Automatic network search with structured reparameterized based linear operation expansion and gradient proxy guided reduction
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.neunet.2026.108657
Guhao Qiu , Ruoxin Chen , Zhihua Chen , Lei Dai , Ping Li , Bin Sheng
Convolutional neural networks and Vision Transformers have achieved great success in various computer vision tasks. However, their huge computation cost hinders their application, and it is hard to design efficient methods to obtain lightweight architectures with either manually designed strategies or automatic search methods. In this paper, we focus on introducing a specific structural reparameterization strategy into SuperNet training to improve the performance of one-shot neural architecture search algorithms. During the SuperNet training process, each candidate operation is expanded by a series of equivalent operation branches to fully utilize its representation potential. To alleviate the training difficulty and avoid excessive computation costs, an operation reduction strategy and a prior sampling strategy are used after validating the sampled subnetworks. The operation reduction strategy removes low-effect extended linear layers. The reduction step first selects the candidate operation based on the SynFlow proxy and then selects the extended linear layer from the selected operation based on the accuracy difference before and after removal.
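The core property structural reparameterization relies on is that parallel linear branches collapse into a single equivalent operator. A minimal 1-D sketch (our own simplification, not the paper's code): a 3-tap convolution plus a parallel pointwise branch merge into one 3-tap kernel with identical outputs.

```python
import numpy as np


def branch_output(x, k3, k1):
    """Expanded form: two parallel linear branches, as in a SuperNet
    operation expanded for training (a 3-tap conv plus a 1-tap conv)."""
    return np.convolve(x, k3, mode="same") + np.convolve(x, k1, mode="same")


def reparameterize(k3, k1):
    """Collapsed form: by linearity of convolution, the pointwise branch
    is absorbed into the center tap of the 3-tap kernel, so the two
    branches become one operator with no change in output."""
    merged = k3.copy()
    merged[1] += k1[0]   # center tap absorbs the 1-tap branch
    return merged
```

This is the same identity that lets the expanded training-time branches be folded back into a single inference-time layer, so the expansion costs nothing at deployment.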
Citations: 0
Hierarchical ranking in hyperbolic space: A novel approach to metric learning
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.neunet.2026.108658
Shuda Zhang , Huiying Li
The integration of deep metric learning with hyperbolic geometry has shown significant potential for capturing complex hierarchical relationships. However, existing clustering-based methods struggle to fully leverage the properties of hyperbolic space, particularly due to the challenge of optimizing both cluster centers and distance metrics in exponentially expanding spaces without true hierarchical labels. Additionally, the computational complexity of Riemannian operations makes maintaining hierarchical structures costly, especially for large datasets. To address these challenges, we propose a novel hierarchical ranking framework that utilizes latent hierarchical information without relying on explicit clustering. This framework introduces the Hierarchical Ranking Generation (HRG) strategy and Hierarchical Ranking Loss (HRL). HRG generates ranking labels based on the semantic relationships between classes within an implicit global hierarchy, while HRL optimizes these rankings across multiple hierarchical levels, enabling the model to learn richer, more nuanced representations. Our approach significantly improves performance, outperforming the state-of-the-art by 2.4% on CUB-200-2011 and 1.6% on Cars-196 at Recall@1.
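The "exponentially expanding" property the abstract leans on is visible in the standard Poincaré-ball distance, which hyperbolic metric-learning methods typically build on (shown here as background, not as this paper's specific loss):

```python
import numpy as np


def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance in the Poincaré ball model of hyperbolic space:

        d(u, v) = arccosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))

    Distances blow up as points approach the unit boundary, which is
    what lets tree-like hierarchies embed with low distortion.
    """
    sq_dist = np.sum((u - v) ** 2)
    denom_u = 1.0 - np.sum(u ** 2)
    denom_v = 1.0 - np.sum(v ** 2)
    return float(np.arccosh(1.0 + 2.0 * sq_dist / max(denom_u * denom_v, eps)))
```

Near the origin this metric behaves almost Euclideanly, while near the boundary the same Euclidean displacement costs far more hyperbolic distance, giving deeper hierarchy levels exponentially more room.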
Citations: 0
LADA: A label-aware framework for cross-domain sentiment classification.
IF 6.3 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 DOI: 10.1016/j.neunet.2026.108659
Yu Tong, Ying Chen, Xupeng Mai, Lisheng Wen, Sentao Chen

Existing domain adaptation approaches to the cross-domain sentiment analysis task fall into two main lines: (i) aligning the distributions of different domains using various distance metrics, and (ii) leveraging a generative-adversarial mechanism. Both lines aim to generate domain-invariant features. However, they share a clear limitation: minimizing only the distance between the source- and target-domain features X fails to establish the relationship between features and labels. Moreover, the generative-adversarial approach suffers from an inherent drawback of that mechanism: the generator may produce irrelevant features as long as they deceive the discriminator. In response to these challenges, we introduce a Label-Aware Domain Adaptation (LADA) framework. LADA utilizes the joint probability distribution to preserve the relationship between features and labels, and achieves domain-invariant feature generation with label information by aligning the joint feature-label distributions of the source and target domains. Comprehensive experiments validate the cross-domain effectiveness of LADA, demonstrating state-of-the-art performance on sentiment analysis benchmarks.
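The abstract's distinction — aligning the joint distribution of features and labels rather than features X alone — can be illustrated with a toy discrepancy over joint embeddings. The outer-product embedding and the linear kernel below are my choices for illustration, not LADA's actual objective:

```python
def joint_embedding(f, y):
    """Flattened outer product f * y^T: a linear joint embedding whose
    dataset mean is the feature-label cross-moment E[f y^T]."""
    return [fi * yj for fi in f for yj in y]

def joint_linear_mmd(src_pairs, tgt_pairs):
    """Squared distance between mean joint embeddings of two domains of
    (feature, one-hot label) pairs. Unlike a feature-only distance, it
    is sensitive to how features pair with labels."""
    def mean(rows):
        n = len(rows)
        return [sum(col) / n for col in zip(*rows)]
    ms = mean([joint_embedding(f, y) for f, y in src_pairs])
    mt = mean([joint_embedding(f, y) for f, y in tgt_pairs])
    return sum((a - b) ** 2 for a, b in zip(ms, mt))
```

Two domains with identical feature marginals but swapped feature-label pairings yield a nonzero joint discrepancy, exactly the signal a feature-only alignment would miss.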

Neural Networks, Volume 199, Article 108659.
Citations: 0
A cortico-cerebellar neural model for task control under incomplete instructions
IF 6.3 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-27 DOI: 10.1016/j.neunet.2026.108648
Lanyun Cui, Ying Yu, Qingyun Wang, Guanrong Chen
Cerebellar-inspired motor control systems have been widely explored in robotics to achieve biologically plausible movement generation. However, most existing models rely heavily on high-dimensional instruction inputs during training, diverging from the input-efficient control observed in biological systems. In humans, effective motor learning often proceeds from sparse or incomplete external feedback, a capability likely attributable to interactions among multiple brain regions, especially the cortex and the cerebellum. In this study, we present a hierarchical cortico-cerebellar neural network model to investigate the neural mechanisms that enable motor control under incomplete or low-dimensional instructions. Evaluation results, measured at two complementary levels of metrics, demonstrate that the cortico-cerebellar model reduces dependency on external instruction without compromising trajectory smoothness. The model features a division of roles: the cortical network handles high-level action selection, while the cerebellar network executes motor commands through torque control, acting directly on a planar arm. Additionally, the cortex exhibits enhanced exploration, driven indirectly by the stochastic characteristics of cerebellar torque control. Our results show that cortico-cerebellar coordination can support robust and flexible control even with sparse instruction signals, suggesting a potential mechanism by which biological systems achieve efficient behavior under informational constraints.
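The division of roles the abstract describes — high-level action selection on top, torque-level execution below — can be sketched as a two-level controller. This is only an illustrative skeleton under my own simplifications (a lookup-table "cortex" and a PD-control "cerebellum"); the paper's modules are learned networks, not hand-coded rules:

```python
def cerebellar_torque(q, qdot, q_goal, kp=10.0, kd=2.0):
    """Low-level module: PD torques driving joint angles q toward q_goal,
    damped by the joint velocities qdot."""
    return [kp * (g - x) - kd * v for x, v, g in zip(q, qdot, q_goal)]

def cortico_cerebellar_step(q, qdot, instruction, goal_table):
    """High-level module maps a (possibly sparse or missing) instruction
    to a joint-space goal; absent an entry, the arm holds its pose."""
    q_goal = goal_table.get(instruction, q)  # fall back: hold position
    return cerebellar_torque(q, qdot, q_goal)
```

The fallback branch is one simple way to remain stable under incomplete instructions: with no recognized instruction and the arm at rest, the commanded torque is zero.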
Neural Networks, Volume 199, Article 108648.
Citations: 0
Reinforcement learning via conservative agent for environments with random delays
IF 6.3 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-27 DOI: 10.1016/j.neunet.2026.108645
Jongsoo Lee, Jangwon Kim, Jiseok Jeong, Soohee Han
Real-world reinforcement learning applications are often subject to unavoidable delayed feedback from the environment. Under such conditions, the standard state representation may no longer induce Markovian dynamics unless additional information is incorporated at decision time, which introduces significant challenges for both learning and control. While numerous delay-compensation methods have been proposed for environments with constant delays, environments with random delays remain largely unexplored due to their inherent variability and unpredictability. In this study, we propose a robust agent for decision-making under bounded random delays, termed the conservative agent. This agent reformulates the random-delay environment into a constant-delay surrogate, which enables any constant-delay method to be directly extended to random-delay environments without modifying its algorithmic structure. Beyond the maximum delay, the conservative agent requires no prior knowledge of the underlying delay distribution, and its performance is invariant to changes in that distribution as long as the maximum delay remains unchanged. We present a theoretical analysis of the conservative agent and evaluate its performance on diverse continuous control tasks from the MuJoCo benchmarks. Empirical results demonstrate that it significantly outperforms existing baselines in terms of both asymptotic performance and sample efficiency.
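The abstract's central move — recasting bounded random delays as a constant delay — can be sketched as a buffer that releases each observation exactly max_delay steps after it was generated, regardless of when it actually arrived. A minimal sketch; the class and method names are illustrative, and the paper's agent involves more than buffering:

```python
class ConstantDelaySurrogate:
    """Turn observations arriving with random delay <= max_delay into a
    stream with constant delay == max_delay (the conservative choice)."""

    def __init__(self, max_delay):
        self.max_delay = max_delay
        self.buffer = {}  # generation step -> observation
        self.t = 0

    def arrive(self, gen_step, obs):
        """Record an observation generated at gen_step, arriving now."""
        self.buffer[gen_step] = obs

    def step(self):
        """Advance one step and release the observation generated exactly
        max_delay steps ago; with delays bounded by max_delay, it is
        guaranteed to have arrived by now."""
        self.t += 1
        return self.buffer.pop(self.t - self.max_delay, None)
```

An agent that always acts on observations aged exactly max_delay steps never sees the delay vary, which is why any constant-delay method can be applied unchanged on top of the surrogate.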
Neural Networks, Volume 199, Article 108645.
Citations: 0