
Latest publications in Neural Processing Letters

Finding Efficient Graph Embeddings and Processing them by a CNN-based Tool
IF 3.1, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-09-02. DOI: 10.1007/s11063-024-11683-0
Attila Tiba, Andras Hajdu, Tamas Giraszi

We introduce new tools to support finding efficient graph embedding techniques for graph databases and to process their outputs using deep learning for classification scenarios. Accordingly, we investigate the possibility of creating an ensemble of different graph embedding methods to raise accuracy and present an interconnected neural network-based ensemble to increase the efficiency of the member classification algorithms. We also introduce a new convolutional neural network-based architecture that can process vectorized graph data produced by various graph embedding methods, and we compare it with other architectures in the literature to show the competitiveness of our approach. We also present a statistics-based inhomogeneity level estimation procedure to efficiently select the optimal embedding for a given graph database. The efficiency of our framework is exhaustively tested on several publicly available graph datasets and numerous state-of-the-art graph embedding techniques. Our experimental results on classification tasks demonstrate the competitiveness of our approach, which outperforms state-of-the-art frameworks.
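For readers who want a concrete starting point, the sketch below shows one hypothetical way to feed fixed-length graph-embedding vectors into a small 1-D CNN classifier in PyTorch. It only illustrates the general idea of processing vectorized graph data with a convolutional model; the layer sizes and the embedding dimension are assumptions, not the authors' actual architecture or ensemble.

```python
# Minimal sketch (not the authors' architecture): a small 1-D CNN that
# classifies fixed-length graph-embedding vectors, e.g. produced by a
# whole-graph embedding method such as Graph2Vec.
import torch
import torch.nn as nn

class EmbeddingCNN(nn.Module):
    def __init__(self, embed_dim=128, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):              # x: (batch, embed_dim)
        x = x.unsqueeze(1)             # -> (batch, 1, embed_dim)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

logits = EmbeddingCNN()(torch.randn(8, 128))   # 8 embedded graphs
print(logits.shape)                            # torch.Size([8, 2])
```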

Citations: 0
Training Artificial Neural Network with a Cultural Algorithm
IF 3.1, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-27. DOI: 10.1007/s11063-024-11636-7
Kübra Tümay Ateş, İbrahim Erdem Kalkan, Cenk Şahin

Artificial neural networks are among the artificial intelligence techniques that provide machines with functionalities such as decision making, comparison, and forecasting, and they are widely used for forecasting in real-world problems. Their acquired knowledge is stored in the interconnection strengths, or weights, of neurons through an optimization process known as learning. Several limitations have been identified in commonly used gradient-based optimization algorithms, including the risk of premature convergence, sensitivity to initial parameters and positions, and the potential for getting trapped in local optima. Various meta-heuristics have been proposed in the literature as alternative training algorithms to mitigate these limitations. Therefore, the primary aim of this study is to combine a feed-forward artificial neural network (ANN) with a cultural algorithm (CA) as a meta-heuristic, aiming to establish a training system that is efficient and dependable in comparison to existing methods. The proposed system (ANN-CA) was evaluated on classification tasks over nine benchmark datasets: Iris, Pima Indians Diabetes, Thyroid Disease, Breast Cancer Wisconsin, Credit Approval, Glass Identification, SPECT Heart, Wine, and Balloon. The overall experimental results indicate that the proposed method outperforms the other methods in the comparative analysis by approximately 12% in terms of classification error and approximately 7% in terms of accuracy.
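As a rough illustration of population-based weight training, the toy sketch below optimizes the weights of a tiny one-hidden-layer network with a cultural-algorithm-flavoured loop in which an elite-derived "belief space" biases new candidates. The network size, belief-space update, and all hyperparameters are invented for illustration and do not reflect the authors' CA.

```python
# Illustrative toy only: population-based search over the weights of a tiny
# 4->5->1 network, with a simple "belief space" (mean/std of the elite) that
# guides new candidates -- the flavour of a cultural algorithm, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4)); y = (X[:, 0] + X[:, 1] > 0).astype(float)

def unpack(w):                       # 31 parameters: 20 + 5 + 5 + 1
    return w[:20].reshape(4, 5), w[20:25], w[25:30].reshape(5, 1), w[30]

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    p = 1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()))
    return np.mean((p - y) ** 2)

dim, pop_size = 31, 40
pop = rng.normal(size=(pop_size, dim))
for gen in range(200):
    fitness = np.array([loss(w) for w in pop])
    elite = pop[np.argsort(fitness)[:5]]                 # acceptance step
    belief_mean, belief_std = elite.mean(0), elite.std(0) + 1e-3
    # influence step: sample new candidates around the belief space
    pop = belief_mean + belief_std * rng.normal(size=(pop_size, dim))
    pop[0] = elite[0]                                    # elitism
print("best MSE:", min(loss(w) for w in pop))
```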

Citations: 0
Lagrange Stability of Competitive Neural Networks with Multiple Time-Varying Delays
IF 3.1, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-26. DOI: 10.1007/s11063-024-11667-0
Dandan Tang, Baoxian Wang, Jigui Jian, Caiqing Hao

In this paper, the Lagrange stability of competitive neural networks (CNNs) with leakage delays and mixed time-varying delays is investigated. By constructing a delay-dependent Lyapunov functional and combining it with inequality analysis techniques, delay-dependent Lagrange stability criteria are obtained in the form of linear matrix inequalities, and the corresponding globally exponentially attractive set (GEAS) is derived. On this basis, by exploring the relationship between the leakage delays and the discrete delay, a better GEAS of the system is obtained for the six different size orderings of the two types of delays. Finally, three numerical simulation examples are given to illustrate the effectiveness of the obtained results.
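For orientation, a generic delay-dependent Lyapunov–Krasovskii functional of the kind typically used to derive LMI-based criteria is shown below; this is a textbook form for illustration only, not the specific functional constructed in the paper.

```latex
% Generic delay-dependent Lyapunov--Krasovskii functional (illustrative form):
V(t) = x^{\top}(t) P x(t)
     + \int_{t-\tau(t)}^{t} x^{\top}(s)\, Q\, x(s)\,ds
     + \int_{-\tau}^{0}\!\int_{t+\theta}^{t} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\,ds\,d\theta,
\qquad P,\,Q,\,R \succ 0,
% whose derivative along the trajectories is bounded by an LMI condition.
```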

Citations: 0
Leveraging Hybrid Deep Learning Models for Enhanced Multivariate Time Series Forecasting
IF 3.1, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-23. DOI: 10.1007/s11063-024-11656-3
Amal Mahmoud, Ammar Mohammed

Time series forecasting is crucial in various domains, ranging from finance and economics to weather prediction and supply chain management. Traditional statistical methods and machine learning models have been widely used for this task. However, they often face limitations in capturing complex temporal dependencies and handling multivariate time series data. In recent years, deep learning models have emerged as a promising solution for overcoming these limitations. This paper investigates how deep learning, specifically hybrid models, can enhance time series forecasting and address the shortcomings of traditional approaches; such hybrid models handle intricate variable interdependencies and non-stationarities in multivariate forecasting. Our results show that the hybrid models achieved lower error rates and higher R² values, signifying their superior predictive performance and generalization capabilities. These architectures effectively extract spatial features and temporal dynamics in multivariate time series by combining convolutional and recurrent modules. This study evaluates deep learning models, specifically hybrid architectures, for multivariate time series forecasting. On two real-world datasets, Traffic Volume and Air Quality, the TCN-BiLSTM model achieved the best overall performance: an R² score of 0.976 for Traffic Volume and an R² score of 0.94 for Air Quality. These results highlight the model's effectiveness in leveraging the strengths of Temporal Convolutional Networks (TCNs) for capturing multi-scale temporal patterns and Bidirectional Long Short-Term Memory (BiLSTM) networks for retaining contextual information, thereby enhancing the accuracy of time series forecasting.
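A minimal sketch of the TCN-BiLSTM architecture class in PyTorch is given below: stacked dilated 1-D convolutions feed a bidirectional LSTM, whose last hidden state drives a linear forecasting head. Layer sizes, the dilation schedule, and the simplified (non-causal) padding are illustrative assumptions, not the authors' exact model.

```python
# Sketch of a hybrid TCN-BiLSTM forecaster: dilated 1-D convolutions extract
# multi-scale temporal features, a bidirectional LSTM models longer-range
# context, and a linear head predicts the next value. Causal padding is
# omitted here for brevity.
import torch
import torch.nn as nn

class TCNBiLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(n_features, hidden, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):              # x: (batch, seq_len, n_features)
        h = self.tcn(x.transpose(1, 2)).transpose(1, 2)   # (batch, seq_len, hidden)
        out, _ = self.bilstm(h)
        return self.head(out[:, -1])   # forecast from the last time step

model = TCNBiLSTM(n_features=8)
print(model(torch.randn(16, 48, 8)).shape)   # torch.Size([16, 1])
```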

Citations: 0
Siamese Tracking Network with Multi-attention Mechanism
IF 3.1, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-23. DOI: 10.1007/s11063-024-11670-5
Yuzhuo Xu, Ting Li, Bing Zhu, Fasheng Wang, Fuming Sun

Object trackers based on Siamese networks view tracking as a similarity-matching process. However, the correlation operation is a local linear matching process, limiting the tracker's ability to capture the intricate nonlinear relationship between the template and search-region branches. Moreover, most trackers do not update the template and often use the first frame of an image as the initial template, which easily leads to poor tracking performance when the tracking target undergoes deformation, scale variation, or occlusion. To this end, we propose a Siamese tracking network with a multi-attention mechanism, including a template branch and a search branch. To adapt to changes in target appearance, we integrate dynamic templates and multi-attention mechanisms in the template branch to obtain a more effective feature representation by fusing the features of initial templates and dynamic templates. To enhance the robustness of the tracking model, we utilize a multi-attention mechanism in the search branch that shares weights with the template branch to obtain a multi-scale feature representation by fusing search-region features at different scales. In addition, we design a lightweight and simple feature-fusion mechanism, in which the Transformer encoder structure is utilized to fuse the information of the template area and search area, and the dynamic template is updated online based on confidence. Experimental results on public tracking datasets show that the proposed method achieves competitive results compared to several state-of-the-art trackers.
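To make the matching step concrete, the sketch below implements the standard depth-wise cross-correlation used by many Siamese trackers, where the template feature map acts as a convolution kernel over the search-region features. The paper's contribution is precisely to augment this kind of linear matching with multi-attention and Transformer-based fusion, which is not shown here; the feature sizes are arbitrary.

```python
# Sketch of the core similarity-matching step in Siamese trackers:
# depth-wise cross-correlation between template and search-region features.
import torch
import torch.nn.functional as F

def depthwise_xcorr(search_feat, template_feat):
    # search_feat: (B, C, Hs, Ws), template_feat: (B, C, Ht, Wt)
    b, c, h, w = search_feat.shape
    kernel = template_feat.reshape(b * c, 1, *template_feat.shape[2:])
    out = F.conv2d(search_feat.reshape(1, b * c, h, w), kernel, groups=b * c)
    return out.reshape(b, c, out.shape[-2], out.shape[-1])

response = depthwise_xcorr(torch.randn(2, 256, 31, 31), torch.randn(2, 256, 7, 7))
print(response.shape)   # torch.Size([2, 256, 25, 25])
```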

Citations: 0
A Transfer-Learning-Like Neural Dynamics Algorithm for Arctic Sea Ice Extraction
IF 3.1, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-14. DOI: 10.1007/s11063-024-11664-3
Bo Peng, Kefan Zhang, Long Jin, Mingsheng Shang

Sea ice plays a pivotal role in ocean-related research, necessitating the development of highly accurate and robust techniques for its extraction from diverse satellite remote sensing imagery. However, conventional learning methods face limitations due to the soaring cost and time associated with manually collecting sufficient sea ice data for model training. This paper introduces an innovative approach in which Neural Dynamics (ND) algorithms are seamlessly integrated with a recurrent neural network, resulting in a Transfer-Learning-Like Neural Dynamics (TLLND) algorithm specifically tailored for sea ice extraction. Firstly, given the susceptibility of the image extraction process to noise in practical scenarios, an ND algorithm with noise tolerance and high extraction accuracy is proposed to address this challenge. Secondly, the internal coefficients of the ND algorithm are determined using a parametric method. Subsequently, the ND algorithm is formulated as a decoupled dynamical system. This enables the coefficients trained on a linear-equation problem dataset to be directly generalized to solve the sea ice extraction challenge. Theoretical analysis ensures that the effectiveness of the proposed TLLND algorithm remains unaffected by the specific characteristics of various datasets. To validate its efficacy, robustness, and generalization performance, several comparative experiments are conducted using diverse Arctic sea ice satellite imagery with varying levels of noise. The outcomes of these experiments affirm the competence of the proposed TLLND algorithm in addressing the complexities associated with sea ice extraction.
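As background for the neural-dynamics component, the sketch below shows a generic gradient-type neural dynamics solver for a linear system Ax = b, Euler-discretized in NumPy. It conveys only the flavour of the underlying dynamical system; it is not the paper's TLLND formulation, and the problem size, gain, and step size are arbitrary.

```python
# Generic gradient-type neural dynamics for A x = b:
#   dx/dt = -gamma * A^T (A x - b),  discretized with a small Euler step.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5)); b = rng.normal(size=5)

x = np.zeros(5)
gamma, dt = 10.0, 1e-3
for _ in range(20000):
    x = x - dt * gamma * A.T @ (A @ x - b)

print("residual:", np.linalg.norm(A @ x - b))
```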

Citations: 0
Kernel Extreme Learning Machine with Discriminative Transfer Feature and Instance Selection for Unsupervised Domain Adaptation
IF 3.1, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-13. DOI: 10.1007/s11063-024-11677-y
Shaofei Zang, Huimin Li, Nannan Lu, Chao Ma, Jiwei Gao, Jianwei Ma, Jinfeng Lv

The goal of domain adaptation (DA) is to develop a robust decision model on the source domain that effectively generalizes to the target domain data. State-of-the-art domain adaptation methods typically focus on finding an optimal inter-domain invariant feature representation or helpful instances from the source domain. In this paper, we propose a kernel extreme learning machine with discriminative transfer features and instance selection (KELM-DTF-IS) for unsupervised domain adaptation tasks, which consists of two steps: discriminative transfer feature extraction and classification with instance selection. At the feature extraction stage, we extend cross-domain mean approximation (CDMA) by incorporating a penalty term and develop discriminative cross-domain mean approximation (d-CDMA) to optimize the category separability between instances. Subsequently, d-CDMA is integrated into the kernel ELM autoencoder (KELM-AE) for extracting inter-domain invariant features. During the classification process, our approach uses CDMA metrics to compute a weight for each source instance based on its impact in reducing distribution differences between domains: instances with a greater effect receive higher weights, and vice versa. These weights are then used to distinguish and select source domain instances before incorporating them into a weighted KELM to build an adaptive classifier. Finally, we apply our approach to classification experiments on publicly available domain adaptation datasets, and the results demonstrate its superiority over KELM and numerous other domain adaptation approaches.
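For reference, the sketch below implements a plain kernel extreme learning machine with an RBF kernel, whose output weights have the standard closed form beta = (I/C + K)^(-1) T. The instance weighting and d-CDMA feature extraction described above are omitted, so this is only the baseline that KELM-DTF-IS builds on; the toy data and hyperparameters are assumptions.

```python
# Plain kernel extreme learning machine (KELM) with an RBF kernel:
# output weights beta = (I/C + K)^(-1) T, prediction f(x) = K(x, X) beta.
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=100.0):
    K = rbf_kernel(X, X)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)   # beta

def kelm_predict(X_train, beta, X_test):
    return rbf_kernel(X_test, X_train) @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4)); y = (X[:, 0] > 0).astype(int)
beta = kelm_fit(X, np.eye(2)[y])                         # one-hot targets
print("train acc:", (kelm_predict(X, beta, X).argmax(1) == y).mean())
```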

Citations: 0
Image Classification Based on Low-Level Feature Enhancement and Attention Mechanism
IF 3.1, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-13. DOI: 10.1007/s11063-024-11680-3
Yong Zhang, Xueqin Li, Wenyun Chen, Ying Zang

Deep learning-based image classification networks heavily rely on the extracted features. However, as the model becomes deeper, important features may be lost, resulting in decreased accuracy. To tackle this issue, this paper proposes an image classification method that enhances low-level features and incorporates an attention mechanism. The proposed method employs EfficientNet as the backbone network for feature extraction. Firstly, the Feature Enhancement Module quantifies and statistically processes low-level features from shallow layers, thereby enhancing the feature information. Secondly, the Convolutional Block Attention Module enhances the high-level features to improve the extraction of global features. Finally, the enhanced low-level features and global features are fused to supplement low-resolution global features with high-resolution details, further improving the model’s image classification ability. Experimental results illustrate that the proposed method achieves a Top-1 classification accuracy of 86.49% and a Top-5 classification accuracy of 96.90% on the ETH-Food101 dataset, 86.99% and 97.24% on the VireoFood-172 dataset, and 70.99% and 92.73% on the UEC-256 dataset. These results demonstrate that the proposed method outperforms existing methods in terms of classification performance.
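The sketch below is a simplified re-implementation of a CBAM-style attention block (channel attention from pooled descriptors followed by 7x7 spatial attention), included to show the kind of module the method applies to its high-level features. The kernel size and reduction ratio are common defaults, not values taken from the paper, and the feature-enhancement and fusion modules are not reproduced.

```python
# Simplified CBAM-style block: channel attention, then spatial attention.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                              # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))             # channel attention
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx)[:, :, None, None]
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))      # spatial attention

feat = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feat).shape)                            # torch.Size([2, 64, 32, 32])
```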

Citations: 0
Improving Neural Radiance Fields Using Near-Surface Sampling with Point Cloud Generation
IF 3.1, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-07-22. DOI: 10.1007/s11063-024-11654-5
Hye Bin Yoo, Hyun Min Han, Sung Soo Hwang, Il Yong Chun

Neural radiance field (NeRF) is an emerging view synthesis method that samples points in a three-dimensional (3D) space and estimates their existence and color probabilities. The disadvantage of NeRF is that it requires a long training time since it samples many 3D points. In addition, if one samples points from occluded regions or in space where an object is unlikely to exist, the rendering quality of NeRF can be degraded. These issues can be solved by estimating the geometry of the 3D scene. This paper proposes a near-surface sampling framework to improve the rendering quality of NeRF. To this end, the proposed method estimates the surface of a 3D object using depth images of the training set and performs sampling only near the estimated surface. To obtain depth information for a novel view, the paper proposes a 3D point cloud generation method and a simple refinement method for the depth projected from a point cloud. Experimental results show that the proposed near-surface sampling NeRF framework can significantly improve rendering quality compared to the original NeRF and three different state-of-the-art NeRF methods. In addition, the proposed near-surface sampling framework can significantly accelerate the training time of a NeRF model.
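To illustrate the sampling idea, the sketch below contrasts uniform sampling along a ray with sampling concentrated in a narrow band around an estimated surface depth. The band width and clipping are arbitrary assumptions, and the paper's actual depth-estimation and refinement steps are not reproduced.

```python
# Near-surface sampling along a camera ray: instead of spreading samples
# uniformly between near and far bounds, concentrate them in a band around an
# estimated surface depth (e.g. from a depth map or point cloud).
import numpy as np

def uniform_samples(near, far, n):
    return np.linspace(near, far, n)

def near_surface_samples(surface_depth, band=0.1, n=64):
    # sample within +/- band of the estimated surface, kept strictly positive
    t = surface_depth + band * (2 * np.random.rand(n) - 1)
    return np.sort(np.clip(t, 1e-3, None))

print(uniform_samples(2.0, 6.0, 8))
print(near_surface_samples(surface_depth=3.7, band=0.1, n=8))
```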

Citations: 0
MDGCL: Graph Contrastive Learning Framework with Multiple Graph Diffusion Methods
IF 3.1, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-07-13. DOI: 10.1007/s11063-024-11672-3
Yuqiang Li, Yi Zhang, Chun Liu

In recent years, several classical graph contrastive learning (GCL) frameworks have been proposed to address the problem of sparse labeling of graph data in the real world. However, in node classification tasks, there are two obvious problems with existing GCL frameworks: first, the stochastic augmentation methods they adopt lose a lot of semantic information; second, the local–local contrasting mode selected by most frameworks ignores the global semantic information of the original graph, which limits the node classification performance of these frameworks. To address the above problems, this paper proposes a novel graph contrastive learning framework, MDGCL, which introduces two graph diffusion methods, Markov and PPR, and a deterministic–stochastic data augmentation strategy while retaining the local–local contrasting mode. Specifically, before using the two stochastic augmentation methods (FeatureDrop and EdgeDrop), MDGCL first uses two deterministic augmentation methods (Markov diffusion and PPR diffusion) to perform data augmentation on the original graph to increase the semantic information; this step ensures that the subsequent stochastic augmentation methods do not lose too much semantic information. Meanwhile, the diffusion matrices carried by the augmented views contain global semantic information of the original graph, allowing the framework to utilize the global semantic information while retaining the local–local contrasting mode, which further enhances the node classification performance of the framework. We conduct extensive comparative experiments on multiple benchmark datasets, and the results show that MDGCL outperforms representative baseline frameworks on node classification tasks. Compared with COSTA, MDGCL improves node classification accuracy by 1.07% and 0.41% on two representative datasets, Amazon-Photo and Coauthor-CS, respectively. In addition, we conduct ablation experiments on two datasets, Cora and CiteSeer, to verify the effectiveness of each improvement in our framework.
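As a concrete example of one of the deterministic augmentations, the sketch below computes a Personalized PageRank diffusion matrix S = alpha (I - (1 - alpha) A_hat)^(-1) over a toy adjacency matrix; this is the standard PPR diffusion kernel, whereas MDGCL's full augmentation pipeline (including Markov diffusion and the stochastic steps) is not shown, and the teleport probability is an assumed default.

```python
# Personalized PageRank (PPR) graph-diffusion matrix over a toy graph:
# S = alpha * (I - (1 - alpha) * A_hat)^(-1), with A_hat the symmetrically
# normalized adjacency matrix after adding self-loops.
import numpy as np

def ppr_diffusion(adj, alpha=0.15):
    n = adj.shape[0]
    a = adj + np.eye(n)                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(1))
    a_hat = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * a_hat)

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
print(np.round(ppr_diffusion(adj), 3))
```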

Citations: 0