
Latest Publications in Neural Networks

Language-based reasoning graph neural network for commonsense question answering
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1016/j.neunet.2024.106816
Meng Yang, Yihao Wang, Yu Gu
Language models (LMs) have played an increasingly important role in commonsense understanding and reasoning for the CSQA (Commonsense Question Answering) task. However, given the number of model parameters, increasing the training data does little to further improve model performance. Introducing external knowledge through graph neural networks (GNNs) has proven effective in boosting performance, but exploiting different knowledge sources and capturing the contextual information between text and the injected knowledge remains a challenge. In this paper, we propose LBR-GNN, a Language-Based Reasoning Graph Neural Network method that addresses these problems by representing the question paired with each answer and the external knowledge using a language model, and predicting the reasoning score with a purpose-built language-based GNN. LBR-GNN first regularizes external knowledge into a consistent textual form and encodes it with a standard LM to capture contextual information. We then build a graph neural network over the encoded information, in particular the language-level edge representations. Finally, we design a novel edge aggregation method to select the edge information used for the GNN update and for language-guided GNN reasoning. We assess the performance of LBR-GNN on the CommonsenseQA, CommonsenseQA-IH, and OpenBookQA datasets. Our evaluation reveals a performance boost of more than 5% over state-of-the-art methods on the CSQA dataset, achieved with a similar number of additional parameters.
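As an illustration of the edge-centric message passing the abstract describes, here is a minimal PyTorch sketch of a GNN layer in which every edge carries its own language-derived feature vector and a learned gate selects how much of each edge's message enters the node update. The shapes, the gating form, and the class name are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class EdgeAwareGNNLayer(nn.Module):
    """One round of message passing with language-level edge features.

    Nodes are LM encodings of (question, answer, knowledge) texts; each
    edge also has an LM-derived feature vector. A learned gate decides
    how much of each edge's message enters the node update (a stand-in
    for the paper's edge-aggregation step)."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # message from (source node, edge)
        self.gate = nn.Linear(2 * dim, 1)    # scalar gate per edge
        self.upd = nn.GRUCell(dim, dim)      # node update

    def forward(self, h, edge_index, edge_feat):
        src, dst = edge_index                       # (E,), (E,)
        m = torch.relu(self.msg(torch.cat([h[src], edge_feat], dim=-1)))
        g = torch.sigmoid(self.gate(torch.cat([h[dst], edge_feat], dim=-1)))
        agg = torch.zeros_like(h).index_add_(0, dst, g * m)  # sum gated messages
        return self.upd(agg, h)

# Toy usage: 4 nodes (question + 3 knowledge facts), 3 edges into node 0.
dim = 16
h = torch.randn(4, dim)                       # node features from an LM encoder
edge_index = torch.tensor([[1, 2, 3], [0, 0, 0]])
edge_feat = torch.randn(3, dim)               # LM encodings of the edge texts
layer = EdgeAwareGNNLayer(dim)
h = layer(h, edge_index, edge_feat)
score = nn.Linear(dim, 1)(h[0])               # reasoning score for this answer
print(score.shape)                            # torch.Size([1])
```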
Citations: 0
GMNI: Achieve good data augmentation in unsupervised graph contrastive learning
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1016/j.neunet.2024.106804
Xin Xiong, Xiangyu Wang, Suorong Yang, Furao Shen, Jian Zhao
Graph contrastive learning (GCL) shows excellent potential in unsupervised graph representation learning. Data augmentation (DA), responsible for generating diverse views, plays a vital role in GCL, and its optimal choice heavily depends on the downstream task. However, task-relevant information cannot be measured in an unsupervised setting. Therefore, many GCL methods risk losing information by failing to preserve what is essential for the downstream task, or risk encoding redundant information. In this paper, we propose a novel method called Minimal Noteworthy Information for unsupervised Graph contrastive learning (GMNI), featuring automated DA. It achieves good DA by balancing missing and excessive information, approximating the optimal views in contrastive learning. We employ an adversarial training strategy to generate views that share minimal noteworthy information (MNI), reducing nuisance information through minimization and ensuring sufficiency by emphasizing noteworthy information. Besides, we introduce MNI-based randomness into the augmentation, thereby enhancing view diversity and stabilizing the model against perturbations. Extensive experiments on unsupervised and semi-supervised learning over 14 datasets demonstrate the superiority of GMNI over GCL methods with automated and manual DA. GMNI achieves up to a 1.64% improvement over the state of the art in unsupervised node classification, up to a 1.97% improvement in unsupervised graph classification, and up to a 3.57% improvement in semi-supervised graph classification.
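A minimal sketch of the adversarial min-max pattern described above, assuming stand-in linear modules in place of real GNN components: the augmenter is updated by gradient ascent on an InfoNCE loss (pushing the two views to share less information), while the encoder descends on the same loss. The exact MNI objective is not reproduced, and the randomness injection is simplified to noise added to the learned views.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE between two views' node embeddings (N, d)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # (N, N) similarity matrix
    labels = torch.arange(z1.size(0))     # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Min-max skeleton: the augmenter tries to reduce the information shared
# across views (maximize the loss), the encoder tries to agree on what
# remains (minimize it).
encoder = torch.nn.Linear(8, 16)          # stand-in for a GNN encoder
augmenter = torch.nn.Linear(8, 8)         # stand-in learnable augmentation
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x = torch.randn(32, 8)                    # toy node features
for step in range(3):
    view1 = augmenter(x) + 0.1 * torch.randn_like(x)  # noise ~ MNI randomness
    view2 = augmenter(x) + 0.1 * torch.randn_like(x)
    loss = info_nce(encoder(view1), encoder(view2))
    opt_enc.zero_grad()
    augmenter.zero_grad()
    loss.backward()
    opt_enc.step()                        # encoder: gradient descent
    with torch.no_grad():                 # augmenter: gradient *ascent*
        for p in augmenter.parameters():
            p += 1e-3 * p.grad
    print(f"step {step}: loss {loss.item():.3f}")
```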
Citations: 0
Uncertainty guided semi-supervised few-shot segmentation with prototype level fusion
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1016/j.neunet.2024.106802
Hailing Wang, Chunwei Wu, Hai Zhang, Guitao Cao, Wenming Cao
Few-Shot Semantic Segmentation (FSS) aims to tackle the challenge of segmenting novel categories with limited annotated data. However, given the diversity among support-query pairs, transferring meta-knowledge to unseen categories poses a significant challenge, particularly in scenarios featuring substantial intra-class variance within an episodic task. To alleviate this issue, we propose the Uncertainty Guided Adaptive Prototype Network (UGAPNet) for semi-supervised few-shot semantic segmentation. The key innovation lies in the generation of reliable pseudo-prototypes as an additional supplement to alleviate intra-class semantic bias. Specifically, we employ a shared meta-learner to produce segmentation results for unlabeled images in the pseudo-label prediction module. Subsequently, we incorporate an uncertainty estimation module to quantify the difference between prototypes extracted from query and support images, facilitating pseudo-label denoising. Utilizing these refined pseudo-label samples, we introduce a prototype rectification module to obtain effective pseudo-prototypes and generate a generalized adaptive prototype for the segmentation of query images. Furthermore, generalized few-shot semantic segmentation extends the paradigm of few-shot semantic segmentation by simultaneously segmenting both unseen and seen classes during evaluation. To address the challenge of confusion-region prediction between these two categories, we further propose a novel Prototype-Level Fusion Strategy in the prototypical contrastive space. Extensive experiments conducted on two benchmarks demonstrate the effectiveness of the proposed UGAPNet and prototype-level fusion strategy. Our source code will be available at https://github.com/WHL182/UGAPNet.
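The prototype mechanism such methods build on can be sketched compactly: a class prototype is extracted from support features by masked average pooling, query pixels are scored by cosine similarity to the prototype, and an entropy-style uncertainty estimate gates which pixels are kept as pseudo-labels. This is a generic prototype/uncertainty sketch, not UGAPNet itself; the threshold and all shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def masked_avg_pool(feat, mask):
    """Class prototype via masked average pooling.
    feat: (B, C, H, W) support features; mask: (B, 1, H', W') binary."""
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    return (feat * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)

def cosine_segment(query_feat, prototype, scale=10.0):
    """Per-pixel cosine similarity to the prototype -> foreground logits."""
    q = F.normalize(query_feat, dim=1)                  # (B, C, H, W)
    p = F.normalize(prototype, dim=1)[..., None, None]  # (B, C, 1, 1)
    return scale * (q * p).sum(dim=1, keepdim=True)     # (B, 1, H, W)

# Toy episode: one support image/mask pair and one query image.
support_feat = torch.randn(1, 64, 32, 32)
support_mask = (torch.rand(1, 1, 128, 128) > 0.5).float()
query_feat = torch.randn(1, 64, 32, 32)

proto = masked_avg_pool(support_feat, support_mask)
prob = torch.sigmoid(cosine_segment(query_feat, proto))

# Uncertainty-style filtering of pseudo-labels: keep only confident pixels.
uncertainty = -(prob * prob.clamp(1e-6).log()
                + (1 - prob) * (1 - prob).clamp(1e-6).log())  # binary entropy
pseudo_label = torch.where(uncertainty < 0.3, (prob > 0.5).float(),
                           torch.full_like(prob, -1.0))       # -1 = ignore
print(pseudo_label.shape)  # torch.Size([1, 1, 32, 32])
```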
Citations: 0
Optimized deep learning networks for accurate identification of cancer cells in bone marrow
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1016/j.neunet.2024.106822
Venkatachalam Kandasamy, Vladimir Simic, Nebojsa Bacanin, Dragan Pamucar
Radiologists use images from X-rays, magnetic resonance imaging, or computed tomography scans to diagnose bone cancer. Manual methods are labor-intensive and may require specialized knowledge, so creating an automated process for distinguishing between malignant and healthy bone is essential. Cancerous bone has a different texture from bone in unaffected areas. Diagnosing hematological illnesses relies on correctly labeling and categorizing nucleated cells in the bone marrow. However, timely diagnosis and treatment are hampered by pathologists' need to identify specimens manually, which is error-prone and time-consuming. The ability to evaluate and identify these more complicated illnesses has been significantly bolstered by the development of artificial intelligence, particularly machine and deep learning. Nevertheless, much research and development is still needed to enhance cancer cell identification and lower false alarm rates. We built a deep learning model for morphological analysis to solve this problem. This paper introduces a novel deep convolutional neural network architecture in which hybrid multi-objective and category-based optimization algorithms adaptively optimize the hyperparameters. Using the processed cell images as input, the proposed model is trained with an optimized attention-based multi-scale convolutional neural network to identify the type of cancer cells in the bone marrow. Extensive experiments are run on publicly available datasets, with the results measured and evaluated using a wide range of performance indicators. The total accuracy of 99.7% was found to be superior to that of previously trained deep learning models.
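The attention-based multi-scale convolutional idea can be illustrated with a small PyTorch block: parallel convolutions at several kernel sizes, fused and reweighted by squeeze-and-excitation channel attention. The kernel sizes, channel counts, and two-class head are placeholders; in the paper these hyperparameters are tuned by the hybrid multi-objective optimizer rather than fixed by hand.

```python
import torch
import torch.nn as nn

class MultiScaleAttentionBlock(nn.Module):
    """Parallel convolutions at several kernel sizes, fused and reweighted
    by squeeze-and-excitation channel attention."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)
        ])
        self.se = nn.Sequential(                      # channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(3 * out_ch, out_ch // 2, 1), nn.ReLU(),
            nn.Conv2d(out_ch // 2, 3 * out_ch, 1), nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(multi * self.se(multi))      # attention-weighted fuse

# Tiny classifier over cell image patches (2 classes: malignant / healthy).
model = nn.Sequential(
    MultiScaleAttentionBlock(3, 32), nn.MaxPool2d(2),
    MultiScaleAttentionBlock(32, 64), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 2),
)
print(model(torch.randn(4, 3, 64, 64)).shape)  # torch.Size([4, 2])
```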
Citations: 0
BiLSTM-Filt: Neural network for radar word segmentation
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1016/j.neunet.2024.106815
Yurui Zhao, Xiang Wang, Zhitao Huang
Radar word extraction is the analysis foundation for multi-function radars (MFRs) in electronic intelligence (ELINT). Although neural networks enhance performance in radar word extraction, current research still faces challenges from complex electromagnetic environments and unknown radar words. Therefore, in this paper, we propose a promising two-stage radar word extraction framework consisting of segmentation and recognition. To fill the gap in radar word segmentation, we establish a mathematical model from the time-series-analysis viewpoint and design a novel segmentation neural network based on Bi-directional Long Short-Term Memory with a filter module (BiLSTM-Filt). Specific radar word structure characteristics are extracted by training the network and applied to detecting radar words in the pulse train. To further improve segmentation performance, a bounding-box regression method is designed to merge information from sub-region structures. Simulation experiments on a typical MFR, Mercury, reveal that the proposed method outperforms the baseline methods within complex electromagnetic environments, including corrupted conditions, various pulse backgrounds, and variable pulse train lengths. Owing to its artificially designed structure, the proposed method can also attempt segmentation of unknown radar words.
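A minimal sketch of the segmentation network's shape, assuming a 1-D convolution as the filter module and a binary inside/boundary tag per pulse; the actual filter design, pulse features, and label scheme in the paper may differ.

```python
import torch
import torch.nn as nn

class BiLSTMFilt(nn.Module):
    """Per-pulse boundary tagging: a 1-D convolutional filter module
    smooths local pulse features before a bidirectional LSTM labels each
    pulse as inside / boundary of a radar word."""
    def __init__(self, feat_dim=4, hidden=64, n_tags=2):
        super().__init__()
        self.filt = nn.Conv1d(feat_dim, feat_dim, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_tags)

    def forward(self, x):                  # x: (B, T, feat) pulse descriptors
        x = self.filt(x.transpose(1, 2)).transpose(1, 2)  # filter module
        out, _ = self.lstm(x)
        return self.head(out)              # (B, T, n_tags) per-pulse logits

# Toy pulse train: 200 pulses, each described by (PRI, PW, RF, amplitude).
pulses = torch.randn(1, 200, 4)
logits = BiLSTMFilt()(pulses)
boundaries = logits.argmax(-1)             # 1 where a radar word ends
print(boundaries.shape)                    # torch.Size([1, 200])
```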
Citations: 0
Dynamic meta-graph convolutional recurrent network for heterogeneous spatiotemporal graph forecasting
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1016/j.neunet.2024.106805
Xianwei Guo, Zhiyong Yu, Fangwan Huang, Xing Chen, Dingqi Yang, Jiangtao Wang
Spatiotemporal Graph (STG) forecasting is an essential task within the realm of spatiotemporal data mining and urban computing. Over the past few years, Spatiotemporal Graph Neural Networks (STGNNs) have gained significant attention as promising solutions for STG forecasting. However, existing methods often overlook two issues: the dynamic spatial dependencies of urban networks and the heterogeneity of urban spatiotemporal data. In this paper, we propose a novel framework for STG learning called Dynamic Meta-Graph Convolutional Recurrent Network (DMetaGCRN), which effectively tackles both challenges. Specifically, we first build a meta-graph generator to dynamically generate graph structures, integrating various dynamic features including input sensor signals and their historical trends, periodic information (timestamp embeddings), and meta-node embeddings. A memory network guides the learning of the meta-node embeddings. The meta-graph generation process enables the model to simulate the dynamic spatial dependencies of urban networks and capture data heterogeneity. Then, we design a Dynamic Meta-Graph Convolutional Recurrent Unit (DMetaGCRU) to simultaneously model spatial and temporal dependencies. Finally, we formulate the proposed DMetaGCRN as an encoder–decoder architecture built upon the DMetaGCRU and meta-graph generator components. Extensive experiments on four real-world urban spatiotemporal datasets validate that the proposed DMetaGCRN framework outperforms state-of-the-art approaches.
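The meta-graph idea can be sketched as a recurrent cell that rebuilds its adjacency matrix at every step from static node embeddings mixed with the current inputs, then runs a graph convolution inside a GRU-style update. This simplified reading omits the timestamp embeddings and the memory network that the paper also feeds into the generator; all sizes are toy values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphGRUCell(nn.Module):
    """One recurrent step: (1) build a graph on the fly from static node
    embeddings mixed with the current inputs, (2) run a graph convolution
    inside a GRU-style update."""
    def __init__(self, n_nodes, in_dim, hid_dim, emb_dim=8):
        super().__init__()
        self.node_emb = nn.Parameter(torch.randn(n_nodes, emb_dim))
        self.mix = nn.Linear(in_dim, emb_dim)        # inject current signal
        self.gru = nn.GRUCell(in_dim + hid_dim, hid_dim)

    def forward(self, x, h):                          # x: (N, in), h: (N, hid)
        e = self.node_emb + self.mix(x)               # time-varying embeddings
        adj = F.softmax(F.relu(e @ e.t()), dim=1)     # dynamic adjacency (N, N)
        neigh = adj @ h                               # graph convolution on h
        return self.gru(torch.cat([x, neigh], dim=-1), h)

# Toy run: 10 sensors, 3 time steps; adjacency is re-derived each step.
cell = DynamicGraphGRUCell(n_nodes=10, in_dim=2, hid_dim=16)
h = torch.zeros(10, 16)
for t in range(3):
    h = cell(torch.randn(10, 2), h)
print(h.shape)  # torch.Size([10, 16])
```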
Citations: 0
Model design and exponential state estimation for discrete-time delayed memristive spiking neural P systems
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-17 | DOI: 10.1016/j.neunet.2024.106801
Nijing Yang, Hong Peng, Jun Wang, Xiang Lu, Antonio Ramírez-de-Arellano, Xiangxiang Wang, Yongbin Yu
This paper investigates exponential state estimation for the discrete-time memristive spiking neural P system (MSNPS). The spiking neural P system (SNPS) offers algorithmic support for neuromorphic computation and AI chips, boasting advantages such as high performance and efficiency. As a new type of information device, memristors have efficient computing characteristics that integrate memory and computation, and can serve as synapses in an SNPS. Therefore, to leverage the combined benefits of SNPS and memristors, this study introduces an innovative MSNPS circuit design in which memristors substitute for the resistors in the SNPS framework. Meanwhile, the MSNPS mathematical model is constructed from the circuit model. To be more practical, the continuous MSNPS is discretized and the time delays in the system are analyzed. Moreover, some sufficient conditions for exponential state estimation are established by applying a Lyapunov functional to the MSNPS. Finally, a numerical simulation example is constructed to validate the main findings.
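For intuition, a Luenberger-type observer for a discrete-time linear system with a constant delay shows the state-estimation setup in miniature; the matrices, delay, and gain below are toy values, whereas the paper handles the nonlinear memristive dynamics and derives the gain from Lyapunov-functional conditions.

```python
import numpy as np

# Plant: x(k+1) = A x(k) + Ad x(k - tau); measurement: y(k) = C x(k).
# Observer: xh(k+1) = A xh(k) + Ad xh(k - tau) + L (y(k) - C xh(k)).
A  = np.array([[0.5, 0.1], [0.0, 0.4]])   # state matrix
Ad = np.array([[0.1, 0.0], [0.05, 0.1]])  # delayed-state matrix
C  = np.array([[1.0, 0.0]])               # output matrix
L  = np.array([[0.4], [0.1]])             # observer gain (assumed stabilizing)
tau = 3                                    # discrete time delay

x  = [np.array([1.0, -1.0])] * (tau + 1)  # true state history
xh = [np.zeros(2)] * (tau + 1)             # estimate history

for k in range(40):
    y = C @ x[-1]                                   # measured output
    x_next  = A @ x[-1] + Ad @ x[-1 - tau]          # plant step
    xh_next = (A @ xh[-1] + Ad @ xh[-1 - tau]
               + (L @ (y - C @ xh[-1])).ravel())    # observer correction
    x.append(x_next); xh.append(xh_next)
    if k % 10 == 0:                                  # error decays ~exponentially
        print(f"k={k:2d}  ||e|| = {np.linalg.norm(x[-1] - xh[-1]):.4f}")
```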
Citations: 0
Near-field millimeter-wave and visible image fusion via transfer learning
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-17 | DOI: 10.1016/j.neunet.2024.106799
Ming Ye, Yitong Li, Di Wu, Xifeng Li, Dongjie Bi, Yongle Xie
To facilitate penetrating-imaging applications such as nondestructive internal defect detection and localization in obstructed environments, a novel pixel-level information fusion strategy for mmWave and visible images is proposed. More concretely, inspired both by the advances of deep learning in universal image fusion and by the maturity of near-field millimeter-wave imaging technology, an effective deep transfer learning strategy is presented to capture the information hidden in visible and millimeter-wave images. Furthermore, by adopting a fine-tuning strategy and an improved bilateral filter, the proposed fusion strategy can robustly exploit the information in both the near-field millimeter-wave field and the visible light field. Extensive experiments show that the proposed strategy provides superior accuracy and robustness in real-world environments.
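The transfer-learning pattern can be sketched as follows, assuming a torchvision ResNet-18 (with downloadable ImageNet weights) as the pretrained encoder: freeze the early layers, fine-tune the last stage, and train a small pixel-level fusion head on concatenated features from both modalities. The backbone choice and the fusion head are assumptions; the paper's network and its improved bilateral filter are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained encoder shared by both modalities; keep the conv trunk only.
backbone = models.resnet18(weights="IMAGENET1K_V1")
encoder = nn.Sequential(*list(backbone.children())[:-2])

for p in encoder.parameters():          # freeze everything first...
    p.requires_grad = False
for p in encoder[-1].parameters():      # ...then unfreeze the last stage
    p.requires_grad = True              # (this is the fine-tuning step)

fusion_head = nn.Sequential(            # maps concatenated features to a
    nn.Conv2d(2 * 512, 256, 3, padding=1), nn.ReLU(),  # (coarse) fused map
    nn.Conv2d(256, 1, 1), nn.Sigmoid(),
)
trainable = [p for p in encoder.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable + list(fusion_head.parameters()), lr=1e-4)

vis = torch.randn(2, 3, 224, 224)       # visible image batch
mmw = torch.randn(2, 3, 224, 224)       # mmWave image, replicated to 3 channels
fused = fusion_head(torch.cat([encoder(vis), encoder(mmw)], dim=1))
print(fused.shape)                      # torch.Size([2, 1, 7, 7])
```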
Citations: 0
DyGraphformer: Transformer combining dynamic spatio-temporal graph network for multivariate time series forecasting
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-17 | DOI: 10.1016/j.neunet.2024.106776
Shuo Han, Yaling Xun, Jianghui Cai, Haifeng Yang, Yanfeng Li
Transformer-based models demonstrate tremendous potential for Multivariate Time Series (MTS) forecasting due to their ability to capture long-term temporal dependencies using the self-attention mechanism. However, effectively modeling the spatial correlation across series is a challenge for the Transformer. Although Graph Neural Networks (GNNs) are competent at modeling spatial dependencies across series, existing methods are based on the assumption of static relationships between variables, which does not align with the time-varying spatial dependencies in real-world series. Therefore, we propose DyGraphformer, which integrates graph convolution into the Transformer to help it model spatial dependencies effectively, while also dynamically inferring time-varying spatial dependencies by combining historical spatial information. In DyGraphformer, the decoder module, which involves complex recursion, is abandoned to accelerate model execution. First, the input is embedded using DSW (Dimension Segment Wise) embedding, integrating positional and node-level embeddings to preserve temporal and spatial information. Then, a time self-attention layer and a dynamic graph convolutional layer are constructed to capture the temporal and spatial dependencies of the multivariate time series, respectively. The dynamic graph convolutional layer uses a Gated Recurrent Unit (GRU) to obtain historical spatial dependencies and integrates the series features of the current time to perform graph structure inference in multiple subspaces. To fully utilize spatio-temporal information at different scales, DyGraphformer performs hierarchical encoder learning for the final forecasting. Extensive experimental results on seven real-world datasets demonstrate that DyGraphformer outperforms state-of-the-art baseline methods, including Transformer-based and GNN-based methods.
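The GRU-guided dynamic graph inference can be sketched as follows: a GRU folds the current series features into a per-node memory of historical spatial context, and an adjacency matrix is inferred in several subspaces (modeled here as attention heads) and averaged. The head count, dimensions, and fusion form are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphInference(nn.Module):
    """GRU carries each node's historical spatial context; at every step
    the current features are fused with that history and an adjacency
    matrix is inferred in several subspaces, then averaged."""
    def __init__(self, feat_dim, hid_dim, n_heads=4):
        super().__init__()
        self.gru = nn.GRUCell(feat_dim, hid_dim)
        self.q = nn.Linear(hid_dim, hid_dim)
        self.k = nn.Linear(hid_dim, hid_dim)
        self.n_heads, self.d = n_heads, hid_dim // n_heads

    def forward(self, x, h):               # x: (N, feat), h: (N, hid)
        h = self.gru(x, h)                 # fold current signal into history
        q = self.q(h).view(-1, self.n_heads, self.d)   # (N, H, d)
        k = self.k(h).view(-1, self.n_heads, self.d)
        scores = torch.einsum("nhd,mhd->hnm", q, k) / self.d ** 0.5
        adj = F.softmax(scores, dim=-1).mean(0)        # average subspaces
        return adj, h

infer = DynamicGraphInference(feat_dim=2, hid_dim=32)
h = torch.zeros(8, 32)                     # 8 series, empty history
for t in range(3):                         # adjacency evolves over time
    adj, h = infer(torch.randn(8, 2), h)
print(adj.shape)                           # torch.Size([8, 8])
```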
Citations: 0
Multi-Modal Graph Aggregation Transformer for image captioning
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-16 | DOI: 10.1016/j.neunet.2024.106813
Lizhi Chen, Kesen Li
Current image captioning methods directly encode detected target regions and recognize the objects in the image to describe it correctly. However, relying fully on regional features is unreliable because they cannot convey contextual information, such as the relationships between objects, and they lack object-predicate-level semantics. An effective model should contain multiple modalities and explore their interactions to help understand the image. Therefore, we introduce the Multi-Modal Graph Aggregation Transformer (MMGAT), which uses the information of various image modalities to fill this gap. It first represents an image as a graph consisting of three sub-graphs, depicting the context grid, region, and semantic text modalities respectively. Then, we introduce three aggregators that guide message passing from one graph to another to exploit context in different modalities, so as to refine the node features. The updated nodes provide better features for image captioning. We report performance of 144.6% CIDEr on MS-COCO and 80.3% CIDEr on Flickr30k, surpassing the state of the art, and conduct a rigorous analysis to demonstrate the importance of each part of our design.
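One such aggregator can be sketched as cross-attention between two node sets: nodes of a target modality (e.g., regions) query nodes of a source modality (e.g., the context grid) and are updated with the gathered messages. The use of nn.MultiheadAttention and the toy node counts are assumptions about how an aggregator of this kind could look.

```python
import torch
import torch.nn as nn

class CrossModalAggregator(nn.Module):
    """Nodes of a target modality attend over nodes of a source modality
    and are updated with the gathered messages (one aggregator; MMGAT
    uses three, one per modality pair)."""
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target, source):
        msg, _ = self.attn(target, source, source)  # target queries source
        return self.norm(target + msg)              # residual node update

dim = 64
grid   = torch.randn(1, 49, dim)   # 7x7 context grid nodes
region = torch.randn(1, 10, dim)   # detected region nodes
text   = torch.randn(1, 5,  dim)   # semantic text nodes

g2r = CrossModalAggregator(dim)    # grid -> region messages
t2r = CrossModalAggregator(dim)    # text -> region messages
region = t2r(g2r(region, grid), text)   # region nodes refined by two modes
print(region.shape)                # torch.Size([1, 10, 64])
```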
Citations: 0