
Latest Publications in Neural Networks

Sonar-neus: voxel-based efficient neural implicit surface reconstruction for forward-looking sonar
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 DOI: 10.1016/j.neunet.2026.108664
Shiji Qiu, Zuoqi Hu, Tiange Zhang, Zhi Liu, Junyu Dong, Qing Cai
Dense 3D reconstruction using forward-looking sonar (FLS) is essential for ocean exploration. Recent advancements in FLS-based 3D reconstruction using neural radiance fields have emerged, demonstrating promising results. However, their excessively slow reconstruction speed significantly impacts their application in real-world scenarios, primarily due to two reasons: (1) the reliance on MLPs for scene representation leads to slow training, often requiring several hours for reconstruction; and (2) the uniform sampling strategy along the elevation arc is inefficient, greatly hindering both training speed and reconstruction quality. To address these challenges, we propose a voxel-based efficient neural implicit surface reconstruction approach using FLS, featuring three key innovations: 1) Replacing MLPs with voxel grids for scene representation, utilizing a signed distance function (SDF) voxel grid to model geometry and a feature voxel grid to capture appearance. 2) Introducing a hierarchical sampling strategy along the elevation arc to improve sampling efficiency. 3) Applying SDF Gaussian convolution to the SDF voxel grid, effectively reducing noise and surface roughness. Extensive experiments demonstrate that our method significantly outperforms existing unsupervised dense FLS reconstruction techniques. Notably, our approach achieves the same reconstruction quality in just 10 minutes of training that previously required 4 hours with state-of-the-art methods, while also delivering superior results. We will open-source our code upon paper acceptance.
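The SDF Gaussian convolution step in the abstract can be illustrated with a minimal sketch (an assumption-based toy example, not the authors' unreleased code): a noisy signed-distance voxel grid is smoothed with a 3D Gaussian filter, and the grid resolution, noise level, and sigma below are arbitrary.

```python
# Toy illustration of smoothing an SDF voxel grid with a Gaussian convolution.
# Not the paper's implementation; grid size, noise level, and sigma are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

n = 64
coords = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5                          # SDF of a sphere of radius 0.5

noisy_sdf = sdf + np.random.normal(scale=0.02, size=sdf.shape)   # simulated sonar noise
smoothed_sdf = gaussian_filter(noisy_sdf, sigma=1.5)             # SDF Gaussian convolution

# The zero level set of smoothed_sdf is the denoised surface; on average the smoothed
# grid is closer to the clean SDF than the noisy one.
print(np.abs(noisy_sdf - sdf).mean(), np.abs(smoothed_sdf - sdf).mean())
```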
Citations: 0
Learnable dendrite neural P systems and applications in survival prediction of glioblastoma patients
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 DOI: 10.1016/j.neunet.2026.108660
Xiu Yin, Xiyu Liu, Shulei Chang, Bosheng Song, Guanzhong Gong, Jiaxing Yin, Dengwang Li, Jie Xue
Current neural-like P systems use “point neurons” as the computing entities, and the computations in these neurons are simplified, ignoring the fact that, in organisms, subcellular compartments (such as neuronal dendrites) can also perform operations as independent computing units in addition to computing at the individual neuron level. The nervous system has a strong ability for optimization learning. Therefore, we propose learnable dendrite neural P (LDNP) systems with new plasticity rules, in which the dendrite structure and learning function can be adaptively changed when solving different application problems. Specifically, the dendrites of neurons are designed as dendritic trees composed of multiple dendritic branches, each of which serves as an independent computing unit. The multilevel complex topological structure of dendrites provides powerful computing capabilities for neurons. A model for predicting the overall survival of glioblastoma (GBM) patients was developed based on LDNP systems and validated on the GBM cohort from the Cancer Genome Atlas. Compared with thirteen state-of-the-art methods, the LDNP system achieves the best performance.
Citations: 0
Cross-view contrastive representation learning on meta-path induced graphs with node features for bundle recommendation
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 DOI: 10.1016/j.neunet.2026.108669
Peng Zhang, Zhendong Niu, Ru Ma, Shunpan Liang, Fuzhi Zhang
Bundle recommendation is designed to suggest a set of correlated items to a user in a holistic manner rather than recommending these items separately. Recent methods introduce contrastive learning (CL) to refine the node representations learned from different graphs (generally termed the item and bundle views) for better recommendation performance. Unfortunately, these methods have two deficiencies. Firstly, few of them explicitly model the user-user and bundle-bundle relationships simultaneously from both the item and bundle views, leading to the underutilization of high-order relationships between users (bundles). Secondly, they use InfoNCE as the contrastive loss, which overlooks the graph structure as supervised signals in defining positive (negative) samples, resulting in anchor-like nodes being treated as negative samples. To tackle these deficiencies, an approach of cross-view contrastive representation learning (CCRL) on meta-path induced graphs with node features is proposed for bundle recommendation. First, we introduce meta-path to model the user-user and bundle-bundle relationships as meta-path induced graphs with node features from both the item and bundle views. Second, we perform graph representation learning on the meta-path induced graphs with node features to procure the user (bundle) representations and introduce a contrastive loss that supports multiple positive samples to build a cross-view graph CL mechanism for refining the learned user (bundle) representations. Finally, the model is trained with a joint optimization objective. Experiments on the benchmark datasets manifest that our approach surpasses the baselines in bundle recommendation.
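As a rough illustration of a contrastive loss that supports multiple positive samples per anchor (one ingredient described above), the sketch below contrasts node embeddings from two views using a positive mask. It is an assumption for illustration, not the CCRL objective itself; the function name, mask construction, and temperature are made up.

```python
# Contrastive loss with multiple positives per anchor (illustrative sketch only).
import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(z1, z2, pos_mask, tau=0.2):
    """z1, z2: (N, d) node embeddings from the two views.
    pos_mask: (N, N) bool, True where node j in view 2 is a positive for anchor i in view 1."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                                    # (N, N) similarity matrix
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_per_anchor = pos_mask.sum(dim=1).clamp(min=1)
    # Average the log-probability over all positives of each anchor, then over anchors.
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_per_anchor
    return loss.mean()

# Toy usage: 8 nodes, 16-dim embeddings, positives from a made-up structure plus the diagonal.
z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
pos_mask = torch.eye(8, dtype=torch.bool) | (torch.rand(8, 8) > 0.8)
print(multi_positive_contrastive_loss(z1, z2, pos_mask).item())
```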
Citations: 0
Dynamic bidirectional data recomposition for efficient road garbage segmentation in semi-supervised learning
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 DOI: 10.1016/j.neunet.2026.108655
Suheng Peng, Jiacai Liao, Libo Cao
Deep neural networks excel in road garbage segmentation but require costly pixel-level annotations. Balancing accuracy and annotation costs is a key bottleneck in urban garbage management. Semi-supervised learning (SSL) reduces the dependence on annotations by utilizing large amounts of unlabeled data. However, existing methods face a key challenge: under extreme annotation imbalance, the scarce labeled data often lacks diversity. This leads to repeated reuse during training, preventing full information exploitation and causing model performance stagnation. Specifically, we introduce the Dynamic Bidirectional Data Recomposition (DBDR) mechanism, which dynamically adjusts the bidirectional information interaction between labeled and unlabeled data to solve the problem of representation stagnation. Early training: The labeled data is integrated into the unlabeled data stream according to confidence levels, guiding the model to prioritize capturing and stabilizing basic semantic prototypes. Mid-training: A dynamic memory queue is constructed to quantify the evolution of model confidence states over time. We use dynamic thresholds and dual validation to trigger a reverse flow of knowledge from unlabeled to labeled supervision. This breaks local optima in the encoder and reshapes the semantic decision boundaries. DBDR can be integrated into any current mainstream SSL framework. On a real-world road garbage dataset, DBDR delivers a significant performance boost over all five state-of-the-art baseline models. Ablation experiments validate its key improvements in the segmentation of confusing targets (e.g., plastic, paper). This research provides an economically feasible solution for future smart city waste management technologies.
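A minimal sketch of the mid-training idea, a dynamic memory queue over confidence states with a threshold-based trigger for the reverse knowledge flow, is given below. The window size, thresholds, and trigger rule are illustrative assumptions rather than the DBDR implementation.

```python
# Illustrative confidence-history queue with a trigger for the reverse flow (assumption).
from collections import deque

class ConfidenceMonitor:
    def __init__(self, window=50, min_conf=0.85, max_slope=1e-3):
        self.history = deque(maxlen=window)        # dynamic memory queue of batch confidences
        self.min_conf, self.max_slope = min_conf, max_slope

    def update(self, mean_batch_confidence):
        self.history.append(mean_batch_confidence)

    def trigger_reverse_flow(self):
        if len(self.history) < self.history.maxlen:
            return False
        recent = list(self.history)
        slope = (recent[-1] - recent[0]) / len(recent)   # crude trend estimate over the window
        # Trigger once confidence is high and has plateaued (model stagnation signal).
        return recent[-1] >= self.min_conf and abs(slope) <= self.max_slope

monitor = ConfidenceMonitor()
for step in range(200):
    monitor.update(min(0.9, 0.5 + step * 0.01))          # simulated confidence curve
    if monitor.trigger_reverse_flow():
        print("reverse flow triggered at step", step)
        break
```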
Citations: 0
HP-GAN: Harnessing pretrained networks for GAN improvement with FakeTwins and discriminator consistency
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 DOI: 10.1016/j.neunet.2026.108666
Geonhui Son, Jeong Ryong Lee, Dosik Hwang
Generative Adversarial Networks (GANs) have made significant progress in enhancing the quality of image synthesis. Recent methods frequently leverage pretrained networks to calculate perceptual losses or utilize pretrained feature spaces. In this paper, we extend the capabilities of pretrained networks by incorporating innovative self-supervised learning techniques and enforcing consistency between discriminators during GAN training. Our proposed method, named HP-GAN, effectively exploits neural network priors through two primary strategies: FakeTwins and discriminator consistency. FakeTwins leverages pretrained networks as encoders to compute a self-supervised loss and applies this through the generated images to train the generator, thereby enabling the generation of more diverse and high-quality images. Additionally, we introduce a consistency mechanism between discriminators that evaluate feature maps extracted from Convolutional Neural Network (CNN) and Vision Transformer (ViT) feature networks. Discriminator consistency promotes coherent learning among discriminators and enhances training robustness by aligning their assessments of image quality. Our extensive evaluation across seventeen datasets, including scenarios with large, small, and limited data and covering a variety of image domains, demonstrates that HP-GAN consistently outperforms current state-of-the-art methods in terms of Fréchet Inception Distance (FID), achieving significant improvements in image diversity and quality. Code is available at: https://github.com/higun2/HP-GAN.
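The discriminator-consistency idea can be sketched as a simple agreement penalty between the scores of the CNN-feature and ViT-feature discriminators. This is an assumption for illustration only; the released code at the link above is authoritative.

```python
# Illustrative consistency term aligning two discriminators' assessments (assumption).
import torch
import torch.nn.functional as F

def discriminator_consistency(scores_cnn, scores_vit):
    """scores_cnn, scores_vit: (B,) raw logits from the CNN- and ViT-feature discriminators
    evaluated on the same batch of images."""
    p_cnn = torch.sigmoid(scores_cnn)
    p_vit = torch.sigmoid(scores_vit)
    # Penalize disagreement between the two discriminators on the same images.
    return F.mse_loss(p_cnn, p_vit)

fake_scores_cnn = torch.randn(4, requires_grad=True)
fake_scores_vit = torch.randn(4, requires_grad=True)
loss = discriminator_consistency(fake_scores_cnn, fake_scores_vit)
loss.backward()
print(loss.item())
```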
Citations: 0
Enhancing adversarial transferability via curvature-aware penalization
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 DOI: 10.1016/j.neunet.2026.108665
Cheng Peng, Zeze Tao, Junyu Liu, Jinjia Peng
Transfer-based attacks generate adversarial examples on a surrogate model and exploit the intriguing property of transferability to deceive other unknown models, making them practical for real-world scenarios. Recent research has sought to optimize the loss surface by minimizing its maximum loss, which in practice cannot be computed exactly and is instead approximated through gradient ascent. However, the loss landscape becomes increasingly non-linear during later attack stages, making the gradient ascent less effective. To address this challenge, we propose a novel attack called Curvature-Aware Penalization (CAP), which incorporates the gradient norm and the curvature-aware term as regularization terms to maintain the flatness of the loss surface. Since directly computing the Hessian matrix is computationally expensive, we utilize the finite difference method to reduce computational complexity. Specifically, we randomly sample an example from the neighborhood and interpolate gradients at three neighboring points along the example’s gradient direction to approximate the Hessian. Additionally, to reduce the variance caused by random sampling, the combined gradients are averaged over multiple stochastic samples. Comprehensive experimental results demonstrate that our CAP can not only craft adversarial examples with enhanced transferability across various network architectures but also exhibit stronger resistance to state-of-the-art adversarial defense methods. Code is available at https://github.com/PC614/CAP.
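The finite-difference idea mentioned above can be sketched as a Hessian-vector product along the gradient direction, estimated from gradients at neighboring points instead of an explicit Hessian. The toy loss, step size, and function names below are assumptions, not the CAP implementation (see the repository linked above for the authors' code).

```python
# Finite-difference Hessian-vector product along the gradient direction (illustrative).
import torch

def loss_fn(x):
    return (x ** 4).sum() + (x ** 2).sum()        # toy non-linear loss surface (assumption)

def grad(x):
    x = x.detach().requires_grad_(True)
    return torch.autograd.grad(loss_fn(x), x)[0]

x = torch.randn(10)
g = grad(x)
v = g / (g.norm() + 1e-12)                        # unit direction of steepest ascent
h = 1e-3                                          # finite-difference step size (assumed)

# Central difference of gradients: H v ~= (grad(x + h v) - grad(x - h v)) / (2 h),
# avoiding any explicit Hessian computation.
hvp = (grad(x + h * v) - grad(x - h * v)) / (2 * h)
curvature_along_g = torch.dot(v, hvp)             # candidate curvature-aware penalty term
print(curvature_along_g.item())
```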
Citations: 0
An interactive axial feature selection network for medical image classification
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 DOI: 10.1016/j.neunet.2026.108661
Shuai Pang, Chunhua Hu, Juan Zhao, Haifang Yu
To address the differences and correlations between features, as well as to fully utilize the importance of salient semantics in medical image classification tasks, this paper proposes an Interactive Axial Feature Selection Network (IAFSNet), aimed at improving feature representation, effectively filtering noise during classification, thereby enhancing classification performance. The paper introduces a newly designed Feature Interaction Module (FIM), which learns spatial differences between various features and enhances the interdependence and complementarity between local spatial features and global contextual semantics. Additionally, the paper implements a novel Axial Feature Selection Module (AFSM), which filters salient feature semantics from three perspectives: horizontal, vertical, and spatial. By adjusting thresholds, salient features are emphasized while irrelevant noise is eliminated, allowing these key features to cross-aggregate layer by layer and establish interactions among them, ultimately improving classification accuracy. Experimental results on four benchmark datasets demonstrate that the proposed IAFSNet exhibits excellent classification performance and robustness, significantly outperforming many existing classification methods.
Citations: 0
Learning discriminative prototypes: Adaptive relation-aware refinement and patch-level contextual feature reweighting for few-shot classification
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 DOI: 10.1016/j.neunet.2026.108649
Mengjuan Jiang, Fanzhang Li
Few-shot learning (FSL) aims to achieve efficient classification with limited labeled samples, providing an important research paradigm for addressing the model generalization issue in data-scarce scenarios. In the metric-based FSL framework, class prototypes serve as the core transferable representation of classes, and their discriminative power directly impacts the model’s classification performance. However, existing methods face two major bottlenecks: first, traditional feature selection mechanisms use static modeling approaches that are susceptible to background noise and struggle to capture dynamic relationships between classes; second, due to limitations in the quantity and quality of labeled samples, prototype representations based on global features lack fine-grained expression of local discriminative features, limiting the prototype’s representational power. To overcome these limitations, we propose a novel framework: Learning Discriminative Prototypes (LDP). LDP includes two modules: (1) Adaptive relation-aware refinement, which dynamically models the relationships between class prototypes, highlighting the key features of each class and effectively enhancing the robustness of feature representations; (2) Patch-level contextual feature reweighting, which performs a reweighting operation on the samples through patch-level feature interactions thereby obtaining a more discriminative prototype. Experimental results demonstrate that LDP shows strong competitiveness on five datasets covering both standard and cross-domain datasets, validating its effectiveness in FSL tasks. For example, in the 1-shot setting on miniImageNet and tieredImageNet, LDP achieves over 12% accuracy improvement compared with the baseline methods; on the cross-domain dataset CUB200, the improvement reaches 6.45% in the 1-shot case. Our code is available on GitHub at https://github.com/fewshot-learner/LDP.
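A rough sketch of patch-level reweighting for prototype construction is given below: support patches are reweighted by their similarity to query patches before being pooled into a class prototype. This is an illustrative assumption; the exact LDP reweighting differs, and the tensor shapes, helper name, and temperature are made up.

```python
# Illustrative patch-level reweighting of support features into a prototype (assumption).
import torch
import torch.nn.functional as F

def reweighted_prototype(support_patches, query_patches, tau=0.1):
    """support_patches: (K, P, d) features of K support images with P patches each.
    query_patches: (P, d) features of one query image."""
    s = F.normalize(support_patches, dim=-1)
    q = F.normalize(query_patches, dim=-1)
    # Each support patch is scored by its best match among the query patches.
    sim = torch.einsum("kpd,qd->kpq", s, q).max(dim=-1).values     # (K, P)
    w = torch.softmax(sim / tau, dim=-1).unsqueeze(-1)             # (K, P, 1) patch weights
    return (w * support_patches).sum(dim=1).mean(dim=0)            # (d,) class prototype

proto = reweighted_prototype(torch.randn(5, 49, 64), torch.randn(49, 64))
print(proto.shape)
```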
Citations: 0
Autorep: Automatic network search with structured reparameterized based linear operation expansion and gradient proxy guided reduction
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 DOI: 10.1016/j.neunet.2026.108657
Guhao Qiu, Ruoxin Chen, Zhihua Chen, Lei Dai, Ping Li, Bin Sheng
Convolutional neural networks and Vision Transformers have achieved great success in various computer vision tasks. However, their huge computation cost hinders their application, and it is hard to obtain lightweight architectures efficiently with either manually designed strategies or automatic search methods. In this paper, we focus on introducing a specific structural reparameterization strategy into SuperNet training to improve the performance of one-shot neural architecture search algorithms. During the SuperNet training process, each candidate operation is expanded by a series of equivalent operation branches to fully utilize its representation potential. To alleviate the training difficulty and avoid excessive computation costs, an operation reduction strategy and a prior sampling strategy are used after validating the sampled subnetworks. The operation reduction strategy removes low-effect extended linear layers: the reduction step first selects a candidate operation based on the SynFlow proxy and then selects the extended linear layer to remove from that operation based on the accuracy difference before and after removal.
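The expansion into equivalent operation branches relies on the standard structural reparameterization fact that parallel linear branches can be folded back into a single convolution after training, so the extra branches add capacity during search but no inference cost. A minimal sketch of that folding (an assumed example, not the paper's code) follows.

```python
# Merging a parallel 1x1 branch into an equivalent single 3x3 convolution (illustrative).
import torch
import torch.nn.functional as F

cin, cout = 8, 16
w3 = torch.randn(cout, cin, 3, 3)                 # main 3x3 branch
w1 = torch.randn(cout, cin, 1, 1)                 # auxiliary 1x1 branch

x = torch.randn(2, cin, 32, 32)
y_branches = F.conv2d(x, w3, padding=1) + F.conv2d(x, w1, padding=0)

# Fold the 1x1 kernel into the center of a 3x3 kernel and add the two weights.
w_merged = w3 + F.pad(w1, (1, 1, 1, 1))           # pad (left, right, top, bottom)
y_merged = F.conv2d(x, w_merged, padding=1)

print(torch.allclose(y_branches, y_merged, atol=1e-5))   # True: the two forms are equivalent
```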
Citations: 0
Hierarchical ranking in hyperbolic space: A novel approach to metric learning
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 DOI: 10.1016/j.neunet.2026.108658
Shuda Zhang, Huiying Li
The integration of deep metric learning with hyperbolic geometry has shown significant potential for capturing complex hierarchical relationships. However, existing clustering-based methods struggle to fully leverage the properties of hyperbolic space, particularly due to the challenge of optimizing both cluster centers and distance metrics in exponentially expanding spaces without true hierarchical labels. Additionally, the computational complexity of Riemannian operations makes maintaining hierarchical structures costly, especially for large datasets. To address these challenges, we propose a novel hierarchical ranking framework that utilizes latent hierarchical information without relying on explicit clustering. This framework introduces the Hierarchical Ranking Generation (HRG) strategy and Hierarchical Ranking Loss (HRL). HRG generates ranking labels based on the semantic relationships between classes within an implicit global hierarchy, while HRL optimizes these rankings across multiple hierarchical levels, enabling the model to learn richer, more nuanced representations. Our approach significantly improves performance, outperforming the state-of-the-art by 2.4% on CUB-200-2011 and 1.6% on Cars-196 at Recall@1.
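For reference, the Poincaré-ball distance commonly used when embedding hierarchies in hyperbolic space is sketched below; this standard formula is illustrative context only and does not reproduce the paper's specific model or losses.

```python
# Geodesic distance in the Poincaré ball model (standard formula, illustrative sketch).
import torch

def poincare_distance(u, v, eps=1e-6):
    """Distance between points u, v strictly inside the unit Poincaré ball, shape (..., d)."""
    sq_u = (u * u).sum(-1).clamp(max=1 - eps)
    sq_v = (v * v).sum(-1).clamp(max=1 - eps)
    sq_diff = ((u - v) ** 2).sum(-1)
    x = 1 + 2 * sq_diff / ((1 - sq_u) * (1 - sq_v))
    return torch.acosh(x.clamp(min=1 + eps))

u = torch.tensor([0.1, 0.2])
v = torch.tensor([0.6, -0.3])
print(poincare_distance(u, v).item())
```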
Citations: 0