
Neural Networks: Latest Publications

Multi-level feature fusion networks for smoke recognition in remote sensing imagery.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Networks 184: 107112 | Pub Date: 2025-04-01 | Epub Date: 2025-01-04 | DOI: 10.1016/j.neunet.2024.107112
Yupeng Wang, Yongli Wang, Zaki Ahmad Khan, Anqi Huang, Jianghui Sang

Smoke is a critical indicator of forest fires, often detectable before flames ignite. Accurate smoke identification in remote sensing images is vital for effective forest fire monitoring within Internet of Things (IoT) systems. However, existing detection methods frequently falter in complex real-world scenarios, where variable smoke shapes and sizes, intricate backgrounds, and smoke-like phenomena (e.g., clouds and haze) lead to missed detections and false alarms. To address these challenges, we propose the Multi-level Feature Fusion Network (MFFNet), a novel framework grounded in contrastive learning. MFFNet begins by extracting multi-scale features from remote sensing images using a pre-trained ConvNeXt model, capturing information across different levels of granularity to accommodate variations in smoke appearance. The Attention Feature Enhancement Module further refines these multi-scale features, enhancing fine-grained, discriminative attributes relevant to smoke detection. Subsequently, the Bilinear Feature Fusion Module combines these enriched features, effectively reducing background interference and improving the model's ability to distinguish smoke from visually similar phenomena. Finally, contrastive feature learning is employed to improve robustness against intra-class variations by focusing on unique regions within the smoke patterns. Evaluated on the benchmark dataset USTC_SmokeRS, MFFNet achieves an accuracy of 98.87%. Additionally, our model demonstrates a detection rate of 94.54% on the extended E_SmokeRS dataset, with a low false alarm rate of 3.30%. These results highlight the effectiveness of MFFNet in recognizing smoke in remote sensing images, surpassing existing methodologies. The code is accessible at https://github.com/WangYuPeng1/MFFNet.
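The Bilinear Feature Fusion Module described above can be illustrated with a minimal sketch of classic bilinear pooling (our simplification, not the authors' exact module): two flattened feature maps are fused by a spatially pooled outer product that captures pairwise channel interactions, then signed-sqrt and L2 normalized.

```python
import numpy as np

def bilinear_fusion(feat_a, feat_b):
    # feat_a: (HW, C1), feat_b: (HW, C2) -- spatial positions x channels
    fused = feat_a.T @ feat_b / feat_a.shape[0]      # (C1, C2) pooled outer product
    fused = fused.ravel()
    fused = np.sign(fused) * np.sqrt(np.abs(fused))  # signed square-root scaling
    return fused / (np.linalg.norm(fused) + 1e-12)   # L2 normalization

rng = np.random.default_rng(0)
v = bilinear_fusion(rng.normal(size=(16, 8)), rng.normal(size=(16, 6)))
print(v.shape)  # (48,)
```

The pooled outer product makes the fused descriptor sensitive to co-occurring channel activations in the two inputs, which is one way a fusion block can suppress background-only responses.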

Citations: 0
ICH-PRNet: a cross-modal intracerebral haemorrhage prognostic prediction method using joint-attention interaction mechanism.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Networks 184: 107096 | Pub Date: 2025-04-01 | Epub Date: 2025-01-06 | DOI: 10.1016/j.neunet.2024.107096
Xinlei Yu, Ahmed Elazab, Ruiquan Ge, Jichao Zhu, Lingyan Zhang, Gangyong Jia, Qing Wu, Xiang Wan, Lihua Li, Changmiao Wang

Accurately predicting intracerebral hemorrhage (ICH) prognosis is a critical and indispensable step in the clinical management of patients post-ICH. Recently, integrating artificial intelligence, particularly deep learning, has significantly enhanced prediction accuracy and relieved neurosurgeons of the burden of manual prognosis assessment. However, uni-modal methods have shown suboptimal performance due to the intricate pathophysiology of ICH. On the other hand, existing cross-modal approaches that incorporate tabular data have often failed to effectively extract complementary information and cross-modal features between modalities, thereby limiting their prognostic capabilities. This study introduces a novel cross-modal network, ICH-PRNet, designed to predict ICH prognosis outcomes. Specifically, we propose a joint-attention interaction encoder that effectively integrates computed tomography images and clinical texts within a unified representational space. Additionally, we define a multi-loss function comprising three components to comprehensively optimize cross-modal fusion capabilities. To balance the training process, we employ a self-adaptive dynamic prioritization algorithm that adjusts the weights of each component accordingly. Our model, through these innovative designs, establishes robust semantic connections between modalities and uncovers rich, complementary cross-modal information, thereby achieving superior prediction results. Extensive experimental results and comparisons with state-of-the-art methods on both in-house and publicly available datasets unequivocally demonstrate the superiority and efficacy of the proposed method. Our code is at https://github.com/YU-deep/ICH-PRNet.git.
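The abstract does not spell out the self-adaptive dynamic prioritization algorithm, so here is one plausible scheme in the spirit of dynamic weight averaging (an assumption on our part, not the paper's exact rule): a loss component that is decreasing slowly gets a larger weight in the next epoch.

```python
import math

def adaptive_weights(prev_losses, curr_losses, T=2.0):
    # ratio close to 1 means slow improvement -> larger weight next epoch
    ratios = [c / max(p, 1e-12) for p, c in zip(prev_losses, curr_losses)]
    exps = [math.exp(r / T) for r in ratios]
    K, s = len(exps), sum(exps)
    return [K * e / s for e in exps]   # weights sum to the number of losses

# three loss components improving at different rates
w = adaptive_weights([1.0, 0.5, 0.2], [0.8, 0.45, 0.19])
print(w)  # the slowest-improving loss (ratio 0.95) gets the largest weight
```

The temperature `T` softens the softmax so weights change smoothly between epochs rather than collapsing onto a single loss term.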

Citations: 0
Identity Model Transformation for boosting performance and efficiency in object detection network.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Networks 184: 107098 | Pub Date: 2025-04-01 | Epub Date: 2024-12-31 | DOI: 10.1016/j.neunet.2024.107098
Zhongyuan Lu, Jin Liu, Miaozhong Xu

Modifying the structure of an existing network is a common way to further improve its performance. However, modifying some layers in a network often results in pre-trained weight mismatch, and the fine-tuning process is time-consuming and resource-inefficient. To address this issue, we propose a novel technique called Identity Model Transformation (IMT), which keeps the outputs before and after transformation equal through rigorous algebraic transformations. This approach ensures that the original model's performance is preserved when layers are modified. Additionally, IMT significantly reduces the total training time required to achieve optimal results while further enhancing network performance. IMT establishes a bridge for rapid transformation between model architectures, enabling a model to quickly perform analytic continuation and derive a family of tree-like models with better performance. This model family possesses greater potential for optimization improvements than a single model. Extensive experiments across various object detection tasks validated the effectiveness and efficiency of the proposed IMT solution, which saved 94.76% of the time spent fine-tuning the base model YOLOv4-Rot on the DOTA 1.5 dataset; using the IMT method, we observed stable performance improvements of 9.89%, 6.94%, 2.36%, and 4.86% on the four datasets AI-TOD, DOTA1.5, coco2017, and MRSAText, respectively.
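A toy illustration of the identity-preserving idea (our sketch, not the paper's exact algebra): one linear layer y = Wx is expanded into two layers whose composition reproduces the original output exactly, so the modified model starts from identical behaviour and needs no weight re-matching.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))        # original layer: y = W x
x = rng.normal(size=8)

W1 = W                             # first new layer keeps the old weights
W2 = np.eye(4)                     # second new layer is initialized as identity
y_orig = W @ x
y_new = W2 @ (W1 @ x)
equal = np.allclose(y_orig, y_new)
print(equal)                       # True: outputs match before any fine-tuning
```

Because `W2` starts as the identity, training can then move it away from identity to exploit the added capacity without ever passing through a degraded model.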

Citations: 0
Synergistic learning with multi-task DeepONet for efficient PDE problem solving.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Networks 184: 107113 | Pub Date: 2025-04-01 | Epub Date: 2025-01-03 | DOI: 10.1016/j.neunet.2024.107113
Varun Kumar, Somdatta Goswami, Katiana Kontolati, Michael D Shields, George Em Karniadakis

Multi-task learning (MTL) is an inductive transfer mechanism designed to leverage useful information from multiple tasks to improve generalization performance compared to single-task learning. It has been extensively explored in traditional machine learning to address issues such as data sparsity and overfitting in neural networks. In this work, we apply MTL to problems in science and engineering governed by partial differential equations (PDEs). However, implementing MTL in this context is complex, as it requires task-specific modifications to accommodate various scenarios representing different physical processes. To this end, we present a multi-task deep operator network (MT-DeepONet) to learn solutions across various functional forms of source terms in a PDE and multiple geometries in a single concurrent training session. We introduce modifications in the branch network of the vanilla DeepONet to account for various functional forms of a parameterized coefficient in a PDE. Additionally, we handle parameterized geometries by introducing a binary mask in the branch network and incorporating it into the loss term to improve convergence and generalization to new geometry tasks. Our approach is demonstrated on three benchmark problems: (1) learning different functional forms of the source term in the Fisher equation; (2) learning multiple geometries in a 2D Darcy Flow problem and showcasing better transfer learning capabilities to new geometries; and (3) learning 3D parameterized geometries for a heat transfer problem and demonstrating the ability to predict on new but similar geometries. Our MT-DeepONet framework offers a novel approach to solving PDE problems in engineering and science under a unified umbrella based on synergistic learning that reduces the overall training cost for neural operators.
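A minimal DeepONet-style evaluation may clarify the branch/trunk structure the abstract builds on (a sketch with random, untrained weights; the single-layer nets and the mask placement are our simplifications): the operator output G(u)(y) is the dot product of branch features, which encode the input function u multiplied by a binary geometry mask, and trunk features, which encode the query point y.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 16                                        # latent dimension
Wb = rng.normal(size=(p, 32)) / np.sqrt(32)   # branch net, one layer for brevity
Wt = rng.normal(size=(p, 2))                  # trunk net

u = rng.normal(size=32)                       # sensor values of the source term
mask = (rng.random(32) > 0.3).astype(float)   # binary mask encoding the geometry
y = np.array([0.5, 0.25])                     # query coordinate

branch = np.tanh(Wb @ (u * mask))
trunk = np.tanh(Wt @ y)
G_u_y = float(branch @ trunk)                 # predicted solution value at y
print(G_u_y)
```

In training, the same mask would also enter the loss so that residuals outside the geometry do not contribute, which is how the paper describes improving generalization to new geometries.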

Citations: 0
Enhancing Recommender Systems through Imputation and Social-Aware Graph Convolutional Neural Network.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Networks 184: 107071 | Pub Date: 2025-04-01 | Epub Date: 2024-12-31 | DOI: 10.1016/j.neunet.2024.107071
Azadeh Faroughi, Parham Moradi, Mahdi Jalili

Recommendation systems are vital tools for helping users discover content that suits their interests. Collaborative filtering methods are one of the techniques employed for analyzing interactions between users and items, which are typically stored in a sparse matrix. This inherent sparsity poses a challenge because it necessitates accurately and effectively filling in these gaps to provide users with meaningful and personalized recommendations. Our solution addresses sparsity in recommendations by incorporating diverse data sources, including trust statements and an imputation graph. The trust graph captures user relationships and trust levels, working in conjunction with an imputation graph, which is constructed by estimating the missing rates of each user based on the user-item matrix using the average rates of the most similar users. Combined with the user-item rating graph, an attention mechanism fine tunes the influence of these graphs, resulting in more personalized and effective recommendations. Our method consistently outperforms state-of-the-art recommenders in real-world dataset evaluations, underscoring its potential to strengthen recommendation systems and mitigate sparsity challenges.
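The imputation step described above can be sketched as follows (our simplification, not the exact pipeline): a user's missing rating is filled with the average rating of the k most similar users, with cosine similarity computed on raw rating rows where missing entries are stored as 0.

```python
import numpy as np

def impute(R, user, item, k=2):
    # R: user-item rating matrix, 0 = missing
    sims = []
    for v in range(R.shape[0]):
        if v == user or R[v, item] == 0:
            continue                      # need neighbours who rated the item
        cos = R[user] @ R[v] / (np.linalg.norm(R[user]) * np.linalg.norm(R[v]) + 1e-12)
        sims.append((cos, R[v, item]))
    top = sorted(sims, reverse=True)[:k]  # k most similar raters
    return sum(r for _, r in top) / len(top) if top else 0.0

R = np.array([[5, 3, 0],
              [4, 3, 4],
              [5, 2, 5],
              [1, 5, 1]], dtype=float)
print(impute(R, 0, 2))  # 4.5: mean of the two most similar users' ratings
```

In the paper these imputed estimates form an imputation graph that is attended over jointly with the trust and rating graphs, rather than being written back into the matrix directly.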

Citations: 0
Dual selective fusion transformer network for hyperspectral image classification
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Networks 187: Article 107311 | Pub Date: 2025-03-03 | DOI: 10.1016/j.neunet.2025.107311
Yichu Xu, Di Wang, Lefei Zhang, Liangpei Zhang
Transformer has achieved satisfactory results in the field of hyperspectral image (HSI) classification. However, existing Transformer models face two key challenges when dealing with HSI scenes characterized by diverse land cover types and rich spectral information: (1) A fixed receptive field overlooks the effective contextual scales required by various HSI objects; (2) invalid self-attention features in context fusion affect model performance. To address these limitations, we propose a novel Dual Selective Fusion Transformer Network (DSFormer) for HSI classification. DSFormer achieves joint spatial and spectral contextual modeling by flexibly selecting and fusing features across different receptive fields, effectively reducing unnecessary information interference by focusing on the most relevant spatial–spectral tokens. Specifically, we design a Kernel Selective Fusion Transformer Block (KSFTB) to learn an optimal receptive field by adaptively fusing spatial and spectral features across different scales, enhancing the model’s ability to accurately identify diverse HSI objects. Additionally, we introduce a Token Selective Fusion Transformer Block (TSFTB), which strategically selects and combines essential tokens during the spatial–spectral self-attention fusion process to capture the most crucial contexts. Extensive experiments conducted on four benchmark HSI datasets demonstrate that the proposed DSFormer significantly improves land cover classification accuracy, outperforming existing state-of-the-art methods. Specifically, DSFormer achieves overall accuracies of 96.59%, 97.66%, 95.17%, and 94.59% in the Pavia University, Houston, Indian Pines, and Whu-HongHu datasets, respectively, reflecting improvements of 3.19%, 1.14%, 0.91%, and 2.80% over the previous model. The code will be available online at https://github.com/YichuXu/DSFormer.
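The scale-selection idea behind the Kernel Selective Fusion Transformer Block can be sketched with a selective-kernel-style fusion (illustrative only; the scorer `Wa` stands in for a small learned network, and the block is not the authors' exact module): channel descriptors from two receptive-field branches are combined with softmax weights, letting the model pick a scale per input.

```python
import numpy as np

def selective_fuse(f_small, f_large, Wa):
    # f_small, f_large: (C,) descriptors from e.g. 3x3 and 5x5 branches
    # Wa: (2, C) scoring weights (placeholder for a learned scorer)
    logits = Wa @ (f_small + f_large)      # one score per branch
    w = np.exp(logits - logits.max())
    w /= w.sum()                           # softmax over the two branches
    return w[0] * f_small + w[1] * f_large

rng = np.random.default_rng(2)
C = 8
out = selective_fuse(rng.normal(size=C), rng.normal(size=C),
                     rng.normal(size=(2, C)))
print(out.shape)  # (8,)
```

Because the weights depend on the input features, different HSI objects can effectively receive different receptive fields, which is the limitation of a fixed receptive field that the abstract calls out.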
Citations: 0
ABVS breast tumour segmentation via integrating CNN with dilated sampling self-attention and feature interaction Transformer
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Networks 187: Article 107312 | Pub Date: 2025-03-03 | DOI: 10.1016/j.neunet.2025.107312
Yiyao Liu, Jinyao Li, Yi Yang, Cheng Zhao, Yongtao Zhang, Peng Yang, Lei Dong, Xiaofei Deng, Ting Zhu, Tianfu Wang, Wei Jiang, Baiying Lei
Given the rapid increase in breast cancer incidence, the Automated Breast Volume Scanner (ABVS) is developed to screen breast tumours efficiently and accurately. However, reviewing ABVS images is a challenging task owing to the significant variations in sizes and shapes of breast tumours. We propose a novel 3D segmentation network (i.e., DST-C) that combines a convolutional neural network (CNN) with a dilated sampling self-attention Transformer (DST). In our network, the global features extracted from the DST branch are guided by the detailed local information provided by the CNN branch, which adapts to the diversity of tumour size and morphology. For medical images, especially ABVS images, the scarcity of annotation leads to difficulty in model training. Therefore, a self-supervised learning method based on a dual-path approach for mask image modelling is introduced to generate valuable representations of images. In addition, a unique postprocessing method is proposed to reduce the false-positive rate and improve the sensitivity simultaneously. The experimental results demonstrate that our model has achieved promising 3D segmentation and detection performance using our in-house dataset. Our code is available at: https://github.com/magnetliu/dstc-net.
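The masked-image-modelling pretext task mentioned above can be sketched in a few lines (the "reconstruction" is a placeholder so the snippet stays self-contained; a real encoder-decoder would predict the missing content): hide a subset of patches, feed the corrupted input, and score reconstruction only on the hidden patches.

```python
import numpy as np

rng = np.random.default_rng(3)
patches = rng.normal(size=(16, 4))       # 16 image patches, 4 features each
mask = np.arange(16) % 2 == 0            # deterministic 50% masking for the demo
corrupted = patches.copy()
corrupted[mask] = 0.0                    # masked patches are zeroed out

recon = corrupted                        # placeholder model output
loss = float(np.mean((recon[mask] - patches[mask]) ** 2))
print(loss > 0.0)  # True: error is measured on hidden patches only
```

Restricting the loss to masked positions is what forces the pretrained representation to encode context, which is valuable when annotations are scarce, as with ABVS volumes.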
Citations: 0
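The CNN-guides-Transformer idea in the DST-C abstract above — detailed local features steering the global branch — can be sketched minimally as a gated fusion. The sigmoid gating form and the tensor shapes below are illustrative assumptions, not the paper's actual fusion module:

```python
import numpy as np

def fuse_local_global(local_feat, global_feat):
    """Gate global (Transformer-branch) features with local (CNN-branch) detail.

    Both inputs are (C, H, W) arrays. A sigmoid gate computed from the local
    branch re-weights the global branch, so fine-grained detail guides the
    global context; adding the local features back keeps the detail itself.
    """
    gate = 1.0 / (1.0 + np.exp(-local_feat))  # sigmoid gate in (0, 1)
    return global_feat * gate + local_feat

rng = np.random.default_rng(0)
local_f = rng.standard_normal((8, 4, 4))
global_f = rng.standard_normal((8, 4, 4))
fused = fuse_local_global(local_f, global_f)
print(fused.shape)  # (8, 4, 4)
```

In the real network the two branches interact at multiple scales; this shows only the guiding principle at a single scale.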
DRTN: Dual Relation Transformer Network with feature erasure and contrastive learning for multi-label image classification
IF 6 1区 计算机科学 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-03 DOI: 10.1016/j.neunet.2025.107309
Wei Zhou, Kang Lin, Zhijie Zheng, Dihu Chen, Tao Su, Haifeng Hu
The objective of the multi-label image classification (MLIC) task is to simultaneously identify multiple objects present in an image. Several researchers directly flatten 2D feature maps into 1D grid feature sequences and utilize a Transformer encoder to capture the correlations among grid features and learn object relationships. Although they obtain promising results, these Transformer-based methods lose spatial information. In addition, current attention-based models often focus only on salient feature regions but ignore other potentially useful features that contribute to the MLIC task. To tackle these problems, we present a novel Dual Relation Transformer Network (DRTN) for the MLIC task, which can be trained in an end-to-end manner. Concretely, to compensate for the loss of spatial information of grid features resulting from the flattening operation, we adopt a grid aggregation scheme to generate pseudo-region features, which does not require additional expensive annotations to train an object detector. Then, a new dual relation enhancement (DRE) module is proposed to capture correlations between objects using two different visual features, thereby complementing the advantages provided by both grid and pseudo-region features. After that, we design a new feature enhancement and erasure (FEE) module to learn discriminative features and mine additional potentially valuable features. By using an attention mechanism to discover the most salient feature regions and removing them with a region-level erasure strategy, our FEE module is able to mine other potentially useful features from the remaining parts. Further, we devise a novel contrastive learning (CL) module to encourage the foregrounds of salient and potential features to be closer, while pushing their foregrounds further away from background features. This compels our model to learn discriminative and valuable features more comprehensively.
Extensive experiments demonstrate that DRTN surpasses current MLIC models on three challenging benchmarks: the MS-COCO 2014, PASCAL VOC 2007, and NUS-WIDE datasets.
Neural Networks, Volume 187, Article 107309.
Citations: 0
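The attention-guided erasure behind the FEE module described above can be sketched as: compute a saliency map, zero out the most salient positions, and let a later stage mine the remainder. The channel-mean saliency proxy and the erase ratio here are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def erase_salient_regions(feat, erase_ratio=0.25):
    """Zero out the most salient spatial positions of a (C, H, W) feature map.

    Saliency is approximated by the channel mean; the top `erase_ratio`
    fraction of positions is masked so that subsequent stages must rely on
    the remaining, potentially useful features.
    """
    attn = feat.mean(axis=0)                   # (H, W) saliency proxy
    k = max(1, int(attn.size * erase_ratio))
    thresh = np.sort(attn.ravel())[-k]         # k-th largest saliency value
    mask = (attn < thresh).astype(feat.dtype)  # 0 at the most salient spots
    return feat * mask, mask

feat = np.random.default_rng(1).standard_normal((16, 8, 8))
erased, mask = erase_salient_regions(feat)
print(int(mask.size - mask.sum()))  # number of erased positions
```

In the full model the erasure is applied at region level and paired with a second forward pass; this shows only the masking step.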
Continual learning of conjugated visual representations through higher-order motion flows
IF 6 1区 计算机科学 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1016/j.neunet.2025.107296
Simone Marullo , Matteo Tiezzi , Marco Gori , Stefano Melacci
Learning with neural networks from a continuous stream of visual information presents several challenges due to the non-i.i.d. nature of the data. However, it also offers novel opportunities to develop representations that are consistent with the information flow. In this paper we investigate the case of unsupervised continual learning of pixel-wise features subject to multiple motion-induced constraints, therefore named motion-conjugated feature representations. Unlike existing approaches, motion is not a given signal (either ground truth or estimated by external modules) but the outcome of a progressive and autonomous learning process, occurring at various levels of the feature hierarchy. Multiple motion flows are estimated with neural networks and characterized by different levels of abstraction, spanning from traditional optical flow to latent signals originating from higher-level features, hence called higher-order motions. Continuously learning to develop consistent multi-order flows and representations is prone to trivial solutions, which we counteract by introducing a self-supervised contrastive loss that is spatially aware and based on flow-induced similarity. We assess our model on photorealistic synthetic streams and real-world videos, comparing against pre-trained state-of-the-art feature extractors (also based on Transformers) and recent unsupervised learning models, and significantly outperform these alternatives.
Neural Networks, Volume 187, Article 107296.
Citations: 0
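The flow-induced contrastive objective mentioned above can be sketched as an InfoNCE-style loss in which feature pairs whose motion flows are similar act as positives and all other pairs as negatives. The distance threshold, temperature, and per-location flow vectors below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def flow_contrastive_loss(feats, flows, temp=0.1, flow_thresh=0.5):
    """InfoNCE-style loss: locations moving coherently (similar flow) are
    pulled together in feature space, while the rest are pushed apart.

    feats: (N, D) per-location features; flows: (N, 2) per-location flows.
    """
    n = feats.shape[0]
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / temp                                  # scaled similarities
    flow_dist = np.linalg.norm(flows[:, None] - flows[None, :], axis=-1)
    pos = (flow_dist < flow_thresh) & ~np.eye(n, dtype=bool)
    loss = 0.0
    for i in range(n):
        if not pos[i].any():
            continue  # no coherently moving partner for this location
        log_den = np.log(np.exp(np.delete(sim[i], i)).sum())  # denom. w/o self
        loss += (log_den - sim[i][pos[i]]).mean()             # each term >= 0
    return loss / n

rng = np.random.default_rng(2)
loss = flow_contrastive_loss(rng.standard_normal((6, 8)), rng.standard_normal((6, 2)))
print(loss >= 0)  # True
```

Treating motion similarity, rather than augmentation, as the source of positive pairs is what makes the loss "flow-induced"; the spatial-awareness of the paper's version is omitted here for brevity.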
PrivCore: Multiplication-activation co-reduction for efficient private inference
IF 6 1区 计算机科学 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1016/j.neunet.2025.107307
Zhi Pang, Lina Wang, Fangchao Yu, Kai Zhao, Bo Zeng, Shuwang Xu
The marriage of deep neural networks (DNNs) and secure two-party computation (2PC) enables private inference (PI) on encrypted client-side data and server-side models with both privacy and accuracy guarantees, at the cost of orders-of-magnitude communication and latency penalties. Prior works on designing PI-friendly network architectures are confined to mitigating the overheads associated with non-linear (e.g., ReLU) operations, assuming the remaining linear computations are free. Recent works have shown that linear convolutions can no longer be ignored and are responsible for the majority of communication in PI protocols. In this work, we present PrivCore, a framework that jointly optimizes the alternating linear and non-linear DNN operators via a careful co-design of sparse Winograd convolution and fine-grained activation reduction, improving the efficiency of ciphertext computation without impacting inference precision. Specifically, aware of the incompatibility between spatial pruning and Winograd convolution, we propose a two-tiered Winograd-aware structured pruning method that removes spatial filters and Winograd vectors from coarse to fine granularity for multiplication reduction, both specifically optimized for Winograd convolution in a structured pattern. PrivCore further develops a novel sensitivity-based differentiable activation approximation to automate the selection of ineffectual ReLUs and polynomial options. PrivCore also supports the dynamic determination of coefficient-adaptive polynomial replacement to mitigate accuracy degradation. Extensive experiments on various models and datasets consistently validate the effectiveness of PrivCore, achieving a 2.2× communication reduction with 1.8% higher accuracy compared with SENet (ICLR 2023) on CIFAR-100, and a 2.0× total communication reduction at iso-accuracy compared with CoPriv (NeurIPS 2023) on ImageNet.
Neural Networks, Volume 187, Article 107307.
Citations: 0
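One ingredient of the approach above — replacing selected ReLUs with low-degree polynomials so that the 2PC protocol needs only cheap additions and multiplications instead of expensive comparisons — can be sketched with a simple least-squares fit. The fitting interval and degree are illustrative assumptions; PrivCore chooses replacements via a sensitivity-based differentiable search with coefficient-adaptive polynomials, not this static fit:

```python
import numpy as np

# Fit a degree-2 polynomial to ReLU on [-3, 3]. Under 2PC, evaluating
# a*x^2 + b*x + c uses only multiplications and additions, which are far
# cheaper than the secure comparison an exact ReLU requires.
xs = np.linspace(-3.0, 3.0, 1001)
relu = np.maximum(xs, 0.0)
coeffs = np.polyfit(xs, relu, deg=2)  # least-squares quadratic fit

def poly_relu(x):
    """Polynomial surrogate for ReLU (accurate only near the fit interval)."""
    return np.polyval(coeffs, x)

max_err = np.abs(poly_relu(xs) - relu).max()
print(max_err < 0.5)  # True: the surrogate stays within 0.5 of ReLU on [-3, 3]
```

A degree-2 surrogate is only one of the "polynomial options" the abstract refers to; the approximation degrades quickly outside the fit interval, which is why which ReLUs to replace must be chosen carefully.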