
Latest Publications in Neurocomputing

Unsupervised fuzzy temporal knowledge graph entity alignment via joint fuzzy semantics learning and global structure learning
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-28 DOI: 10.1016/j.neucom.2024.129019
Jingni Song, Luyi Bai, Xuanxuan An, Longlong Zhou
Temporal Knowledge Graph Entity Alignment (TKGEA) aims to identify equivalent entities across different Temporal Knowledge Graphs (TKGs), which is important for knowledge fusion. The current mainstream TKGEA models are supervised embedding-based models that rely on pre-aligned seeds and implicitly encode structural information into the entity embedding space to identify equivalent entities. To handle the structural information of TKGs, some models use Graph Neural Network (GNN) encoders, but they neglect the design of the decoder and thus fail to fully leverage that structural information. In addition, they primarily focus on crisp TKGs with clear entity semantics. However, many real-world TKGs exhibit fuzzy semantics, and this fuzziness makes it challenging for existing TKGEA models to align equivalent fuzzy entities. To solve the above problems, we propose a novel unsupervised Fuzzy Temporal Knowledge Graph Entity Alignment (EA) framework that jointly performs Fuzzy Semantics Learning and Global Structure Learning, namely FTFS. In this framework, we convert the EA task into an unsupervised optimal transport task between two intra-graph matrices, eliminating the need for pre-aligned seeds and thereby avoiding intensive manual labor. Because the optimal-transport-based decoder further considers the relation between graph structure and entities, it makes better use of the global structural information rather than simply encoding it implicitly into the embedding space. Moreover, unlike TKGEA models that use binary classification to represent temporal relational facts, we introduce fuzzy semantics learning to embed the membership degrees of fuzzy temporal relational facts. Extensive experiments on five FTKG datasets show that our unsupervised method is superior to state-of-the-art EA methods.
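The decoding step described above, casting alignment as optimal transport between the two graphs' entity representations, can be illustrated with a minimal Sinkhorn sketch in Python. This is not the authors' implementation: the cosine cost, the uniform marginals, the function name, and the regularization value are illustrative assumptions, and the fuzzy membership weighting and global-structure terms are omitted.

```python
import numpy as np

def sinkhorn_alignment(feat_src, feat_tgt, reg=0.05, n_iters=200):
    """Soft entity alignment via entropic optimal transport (Sinkhorn scaling).

    feat_src: (n, d) entity features of the source TKG.
    feat_tgt: (m, d) entity features of the target TKG.
    Returns an (n, m) transport plan; its row-wise argmax gives the alignment.
    """
    # Cost = 1 - cosine similarity between cross-graph entity features.
    a = feat_src / (np.linalg.norm(feat_src, axis=1, keepdims=True) + 1e-12)
    b = feat_tgt / (np.linalg.norm(feat_tgt, axis=1, keepdims=True) + 1e-12)
    cost = 1.0 - a @ b.T

    K = np.exp(-cost / reg)              # Gibbs kernel
    u = np.ones(len(a)) / len(a)         # uniform source marginal
    v = np.ones(len(b)) / len(b)         # uniform target marginal
    r, c = u.copy(), v.copy()
    for _ in range(n_iters):             # alternating marginal scaling
        r = u / (K @ c + 1e-12)
        c = v / (K.T @ r + 1e-12)
    return np.diag(r) @ K @ np.diag(c)
```

No pre-aligned seeds appear anywhere in this computation, which is the point of the unsupervised formulation.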
Citations: 0
Depth-Wise Convolutions in Vision Transformers for efficient training on small datasets
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-28 DOI: 10.1016/j.neucom.2024.128998
Tianxiao Zhang, Wenju Xu, Bo Luo, Guanghui Wang
The Vision Transformer (ViT) leverages the Transformer’s encoder to capture global information by dividing images into patches and achieves superior performance across various computer vision tasks. However, the self-attention mechanism of ViT captures the global context from the outset, overlooking the inherent relationships between neighboring pixels in images or videos. Transformers mainly focus on global information while ignoring the fine-grained local details. Consequently, ViT lacks inductive bias during image or video dataset training. In contrast, convolutional neural networks (CNNs), with their reliance on local filters, possess an inherent inductive bias, making them more efficient and quicker to converge than ViT with less data. In this paper, we present a lightweight Depth-Wise Convolution module as a shortcut in ViT models, bypassing entire Transformer blocks to ensure the models capture both local and global information with minimal overhead. Additionally, we introduce two architecture variants, allowing the Depth-Wise Convolution modules to be applied to multiple Transformer blocks for parameter savings, and incorporating independent parallel Depth-Wise Convolution modules with different kernels to enhance the acquisition of local information. The proposed approach significantly boosts the performance of ViT models on image classification, object detection, and instance segmentation by a large margin, especially on small datasets, as evaluated on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet for image classification, and COCO for object detection and instance segmentation. The source code can be accessed at https://github.com/ZTX-100/Efficient_ViT_with_DW.
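The depth-wise convolution shortcut that bypasses an entire Transformer block can be sketched in a few lines of PyTorch. This is a hedged illustration rather than the released code at the repository above: the class token is ignored, and the module names, kernel size, and block wrapping are assumptions.

```python
import torch.nn as nn

class DWConvShortcut(nn.Module):
    """Depth-wise 3x3 convolution over patch tokens, used as a lightweight
    local-information path in parallel with a Transformer block."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size,
                                padding=kernel_size // 2, groups=dim)

    def forward(self, tokens, h, w):
        # tokens: (B, N, C) patch embeddings laid out on an h x w grid (N = h*w).
        b, n, c = tokens.shape
        x = tokens.transpose(1, 2).reshape(b, c, h, w)   # tokens -> feature map
        x = self.dwconv(x)
        return x.flatten(2).transpose(1, 2)              # back to (B, N, C)

class BlockWithDWShortcut(nn.Module):
    def __init__(self, transformer_block, dim):
        super().__init__()
        self.block = transformer_block        # an existing ViT encoder block
        self.shortcut = DWConvShortcut(dim)

    def forward(self, tokens, h, w):
        # The local (depth-wise conv) path bypasses the whole Transformer block,
        # so the sum carries both global attention features and local detail.
        return self.block(tokens) + self.shortcut(tokens, h, w)
```

Sharing one shortcut module across several consecutive blocks, or running parallel shortcuts with different kernels, corresponds to the two architecture variants mentioned in the abstract.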
Citations: 0
A neurodynamic approach with fixed-time convergence for complex-variable pseudo-monotone variational inequalities
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-28 DOI: 10.1016/j.neucom.2024.128988
Jinlan Zheng, Xingxing Ju, Naimin Zhang, Dongpo Xu
Based on Wirtinger calculus, this paper proposes a complex-valued projection neural network (CPNN) designed to address complex-variable variational inequality problems. The global convergence of the CPNN is established under the assumptions of pseudo-monotonicity and Lipschitz continuity. We demonstrate that the CPNN converges within a fixed time that is unaffected by the initial conditions, reaching the optimal solution of the constrained optimization problem; this is distinct from asymptotic or exponential convergence, whose behavior depends on the initial conditions. Furthermore, the CPNN is useful for tackling diverse related problems, encompassing variational inequalities, pseudo-convex optimization problems, linear and nonlinear complementarity problems, as well as linear and convex quadratic programming problems. The efficacy of the proposed CPNN is substantiated through numerical simulations.
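For readers unfamiliar with projection neural networks, the sketch below integrates the generic real-valued dynamics dx/dt = lambda * (P_Omega(x - alpha*F(x)) - x) with forward Euler steps; any equilibrium of these dynamics solves the variational inequality VI(F, Omega). The paper's CPNN works on complex variables via Wirtinger calculus and adds terms that give fixed-time rather than asymptotic convergence, none of which is reproduced here; the mapping, box constraint, and step sizes are illustrative assumptions.

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box constraint set Omega = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def projection_nn(F, x0, lo, hi, alpha=0.5, lam=5.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of dx/dt = lam*(P_Omega(x - alpha*F(x)) - x).
    An equilibrium satisfies x* = P_Omega(x* - alpha*F(x*)), i.e. x* solves VI(F, Omega)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * lam * (project_box(x - alpha * F(x), lo, hi) - x)
    return x

# A monotone (hence pseudo-monotone) affine mapping, solved over a box constraint.
F = lambda x: np.array([2.0 * x[0] - x[1] + 1.0, 2.0 * x[1] - x[0] - 1.0])
print(projection_nn(F, np.array([3.0, -3.0]), lo=-1.0, hi=1.0))
```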
Citations: 0
A new visual-inertial odometry scheme for unmanned systems in unified framework of zeroing neural networks
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-28 DOI: 10.1016/j.neucom.2024.129017
Dechao Chen, Jianan Jiang, Zhixiong Wang, Shuai Li
In recent years, multi-sensor fusion has gained significant attention from researchers and is used extensively in simultaneous localization and mapping (SLAM) applications such as visual-inertial odometry (VIO). This technology primarily utilizes visual and odometry measurements to estimate the position, orientation, and environment of unmanned aerial vehicles (UAVs). However, in most previous works, the input error data of the sensors in the system were treated as independent. To improve system precision and fully utilize sensor data, a new method called Multi-State Constraint Kalman Filter with NearSAC (MSCKF-NearSAC), based on the MSCKF, is proposed. This method eliminates outliers by limiting the range of selected points, which significantly improves the success rate of feature-point matching in the front-end. Furthermore, the MSCKF-ZNN method is proposed for the back-end; it combines a zeroing neural network (ZNN, which originated from the Hopfield-type neural network) with the error state, yielding an exponentially converging output trajectory error and thus improving the trajectory precision of the SLAM system. The proposed algorithms, MSCKF-NearSAC and MSCKF-ZNN, are integrated into the stereo multi-state constraint Kalman filter (S-MSCKF) system. Extensive comparison experiments, using precise measurement and calibration techniques, are conducted on open-source datasets and in real-world environments. Experimental results demonstrate that the introduced approach exhibits higher stability than other algorithms.
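The front-end idea of rejecting outlier correspondences by limiting the range of selected points can be approximated by a proximity gate followed by standard RANSAC, as in the OpenCV sketch below. The gate radius, the fundamental-matrix model, and the function name are assumptions made for illustration; the abstract does not spell out the actual NearSAC procedure, and the ZNN back-end is not shown.

```python
import numpy as np
import cv2

def gated_ransac_matches(kp1, kp2, matches, max_pixel_shift=80.0):
    """Outlier rejection for feature matches: drop correspondences whose pixel
    displacement exceeds a gate (a rough stand-in for limiting the range of
    selected points), then keep the RANSAC inliers of a fundamental matrix."""
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Proximity gate: small apparent motion between consecutive frames.
    keep = np.linalg.norm(pts1 - pts2, axis=1) < max_pixel_shift
    pts1, pts2 = pts1[keep], pts2[keep]
    gated = [m for m, k in zip(matches, keep) if k]

    if len(pts1) < 8:                      # the 8-point model needs 8 correspondences
        return gated
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    if mask is None:
        return gated
    return [m for m, ok in zip(gated, mask.ravel().astype(bool)) if ok]
```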
Citations: 0
Towards robust DeepFake distortion attack via adversarial autoaugment
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-28 DOI: 10.1016/j.neucom.2024.129011
Qi Guo, Shanmin Pang, Zhikai Chen, Qing Guo
Face forgery by DeepFake poses a potential threat to society. Previous studies have shown that adversarial examples can effectively disrupt DeepFake models. However, the practical application of adversarial examples to defend against DeepFake is limited by the existence of various input transformations. To address this issue, we propose a Robust DeepFake Distortion Attack (RDDA) method from the perspective of data augmentation, which uses adversarial autoaugment to generate robust and generalized adversarial examples to disrupt DeepFake. Specifically, we design an adversarial autoaugment module to synthesize diverse and challenging input transformations. By coping with these transformations, the robustness and generalization ability of the adversarial examples in disrupting DeepFake models are greatly enhanced. In addition, we further improve the generalization ability of adversarial examples in handling specific input transformations through incremental learning. With RDDA and incremental learning, our generated adversarial examples can effectively protect personal privacy from being violated by DeepFake. Extensive experiments on public benchmarks demonstrate that our DeepFake defense method has better robustness and generalization ability than state-of-the-art methods.
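A generic way to make a disrupting perturbation survive input transformations is to average gradients over a set of differentiable transformations inside a PGD-style loop, as sketched below. RDDA goes further and learns the transformation policy adversarially (adversarial autoaugment), which is not shown; the loss, step sizes, and the deepfake_model / transforms interfaces are assumptions.

```python
import torch
import torch.nn.functional as F

def robust_disrupting_perturbation(deepfake_model, image, transforms,
                                   epsilon=8 / 255, alpha=2 / 255, steps=40):
    """PGD-style perturbation that stays effective under input transformations:
    each step sums gradients over the sampled transformations, then ascends to
    maximize the distortion of the DeepFake model's output."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for t in transforms:                        # differentiable resize/blur/etc.
            ref = deepfake_model(t(image)).detach()    # fake from the clean input
            out = deepfake_model(t(image + delta))     # fake from the protected input
            loss = loss + F.mse_loss(out, ref)         # push the two apart
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()         # gradient ascent step
            delta.clamp_(-epsilon, epsilon)            # keep the perturbation small
        delta.grad = None
    return (image + delta).clamp(0, 1).detach()
```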
Citations: 0
Point cloud feature consistency learning for incomplete 3D face recognition
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-28 DOI: 10.1016/j.neucom.2024.129000
Faxiu Huang, Yanqiong Guo, Zhu Xu, Zhisheng You, Xiao Yang
Point cloud-based 3D face recognition has emerged as an exciting research topic due to the availability of 3D facial structures and detailed surface information. Existing approaches have primarily focused on complete facial point clouds and have achieved remarkable results. However, in real-world applications, the collected facial point clouds are often incomplete due to factors such as various poses, occlusion, and noise, posing significant challenges to face recognition tasks. In this paper, a feature consistency learning framework is proposed to improve incomplete 3D face recognition. The feature gap between incomplete and complete data is filled through joint optimization of completion and supervised contrastive learning. Specifically, to maintain and enhance the structure of incomplete point clouds, we introduce a structure-enhanced representation method for neighboring points that incorporates positional information residuals during the formation of point proxies. Additionally, a simple and effective dynamic input approach within the point proxy completion process is designed to alleviate concerns related to density disparities and detail loss in point clouds that exhibit relatively minor degrees of incompleteness. Extensive experiments on four datasets demonstrate our proposed method outperforms state-of-the-art methods on both inherent and artificially constructed incomplete data. Moreover, it also achieves comparable results on complete 3D face recognition. Overall, this work represents an early exploration into the realm of point cloud-based incomplete 3D face recognition through feature consistency learning, providing a promising approach for practical applications.
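The joint optimization with supervised contrastive learning suggests a loss in which features of incomplete and complete scans of the same identity attract each other while other identities repel. The sketch below is a generic supervised contrastive (SupCon-style) loss over such paired features, not the paper's exact formulation; the pairing scheme and temperature are assumptions, and the completion branch is omitted.

```python
import torch
import torch.nn.functional as F

def feature_consistency_supcon(feat_partial, feat_complete, labels, temperature=0.07):
    """Supervised contrastive loss over paired features of incomplete and complete
    point clouds: embeddings sharing an identity label attract, all others repel."""
    z = F.normalize(torch.cat([feat_partial, feat_complete], dim=0), dim=1)
    y = torch.cat([labels, labels], dim=0)
    sim = z @ z.t() / temperature                          # (2B, 2B) similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float('-inf'))        # never contrast with self
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)        # avoid -inf * 0
    mean_log_prob_pos = (log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()
```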
Citations: 0
LGAT: A novel model for multivariate time series anomaly detection with improved anomaly transformer and learning graph structures
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-28 DOI: 10.1016/j.neucom.2024.129024
Mi Wen, ZheHui Chen, Yun Xiong, YiChuan Zhang
Time series anomaly detection involves identifying data points in continuously collected datasets that deviate from normal patterns. Given that real-world systems often consist of multiple variables, detecting anomalies in multivariate datasets has become a key focus of current research. This task has wide-ranging applications in system maintenance across various industries, such as water treatment and distribution networks, transportation, and autonomous vehicles, driving active research in time series anomaly detection. Traditional methods primarily address the problem by predicting and reconstructing input time steps, but they still suffer from overgeneralization and inconsistent performance when reasoning about complex dynamics. In response, we propose a novel unsupervised model called LGAT, which automatically learns graph structures and leverages an enhanced Anomaly Transformer architecture to capture temporal dependencies. Moreover, the model features a new encoder–decoder architecture designed to enhance context extraction capabilities. In particular, the model computes anomaly scores for multivariate time series anomaly detection by combining the reconstruction of the input time series with the model's learned prior associations and sequential correlations. The model captures inter-variable relationships and exhibits stronger context extraction abilities, making it more sensitive to anomalies. Extensive experiments on six common anomaly detection benchmarks further demonstrate the superiority of our approach over other state-of-the-art methods, with an improvement of approximately 1.2% across various metrics.
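The abstract does not give the exact scoring rule, but since LGAT builds on the Anomaly Transformer, a plausible sketch of the per-time-step anomaly score combines pointwise reconstruction error with the association discrepancy between the learned prior associations and the series (self-attention) associations, as below; the shapes, the symmetrized-KL discrepancy, and the names are assumptions.

```python
import numpy as np

def anomaly_scores(x, x_hat, prior_assoc, series_assoc, eps=1e-12):
    """Per-time-step anomaly score: reconstruction error re-weighted by a
    softmax over the negative association discrepancy.

    x, x_hat:                  (T, D) input window and its reconstruction.
    prior_assoc, series_assoc: (T, T) row-stochastic association matrices."""
    recon = np.sum((x - x_hat) ** 2, axis=1)                          # (T,)
    p, s = prior_assoc + eps, series_assoc + eps
    kl = np.sum(p * np.log(p / s), axis=1) + np.sum(s * np.log(s / p), axis=1)
    weight = np.exp(-kl)
    weight = weight / weight.sum()                                    # softmax(-kl)
    return weight * recon   # large where reconstruction is poor and the point's
                            # associations look abnormal at the same time
```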
Citations: 0
Towards the characterization of representations learned via capsule-based network architectures
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-28 DOI: 10.1016/j.neucom.2024.129027
Saja Tawalbeh, José Oramas
Capsule Neural Networks (CapsNets) have been re-introduced as a more compact and interpretable alternative to standard deep neural networks. While recent efforts have proved their compression capabilities, to date, their interpretability properties have not been fully assessed. Here, we conduct a systematic and principled study towards assessing the interpretability of these types of networks. We pay special attention to analyzing the level to which part-whole relationships are encoded within the learned representation. Our analysis of several capsule-based architectures on the MNIST, SVHN, CIFAR-10, and CelebA datasets suggests that the representations encoded in CapsNets might not be as disentangled nor as strictly related to parts-whole relationships as is commonly stated in the literature.
Citations: 0
Accuracy-preassigned fixed-time synchronization of switched inertial neural networks with time-varying distributed, leakage and transmission delays
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-28 DOI: 10.1016/j.neucom.2024.128958
Shilei Yuan, Yantao Wang, Xiaona Yang, Xian Zhang
In this paper, the accuracy-preassigned fixed-time synchronization problem for a class of switched inertial neural networks with time-varying distributed, leakage, and transmission delays is studied. To this end, a direct analysis method based on parameterized system solutions is proposed for the first time. Unlike existing works, this method sets out from the definition of accuracy-preassigned fixed-time synchronization and requires neither a variable substitution for the inertial term nor the construction of any Lyapunov–Krasovskii functional. This not only simplifies the proof process but also reduces the computational complexity of solving the synchronization conditions. Notably, this paper introduces the time-varying leakage delay into switched inertial neural networks for the first time. Furthermore, the approach utilized in this work stands apart from all previous techniques for achieving fixed-time synchronization. Finally, the reliability of the theoretical results is verified by numerical simulation.
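For context, a generic inertial neural network with leakage, transmission, and distributed delays, the model class this paper studies, is commonly written as follows; the exact switched system, controller design, and delay assumptions in the paper may differ:

```latex
\ddot{x}_i(t) = -a_i \dot{x}_i(t) - b_i x_i\bigl(t-\sigma(t)\bigr)
  + \sum_{j=1}^{n} c_{ij} f_j\bigl(x_j(t)\bigr)
  + \sum_{j=1}^{n} d_{ij} f_j\bigl(x_j(t-\tau(t))\bigr)
  + \sum_{j=1}^{n} e_{ij} \int_{t-\rho(t)}^{t} f_j\bigl(x_j(s)\bigr)\,\mathrm{d}s
  + I_i(t), \qquad i = 1,\dots,n,
```

where sigma(t), tau(t), and rho(t) denote the time-varying leakage, transmission, and distributed delays, a_i, b_i > 0, f_j are activation functions, and I_i(t) is an external input; switching replaces the connection weights c_ij, d_ij, e_ij with mode-dependent values.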
Citations: 0
Prototype matching-based meta-learning model for few-shot fault diagnosis of mechanical system
IF 5.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-27 DOI: 10.1016/j.neucom.2024.129012
Lin Lin, Sihao Zhang, Song Fu, Yikun Liu, Shiwei Suo, Guolei Hu
The efficacy of advanced deep-learning diagnostic methods is contingent mainly upon sufficient trainable data for each fault category. However, gathering ample data in real-world scenarios is often challenging, rendering these deep-learning techniques ineffective. This paper introduces a novel Prototype Matching-based Meta-Learning (PMML) approach to address the few-shot fault diagnosis under constrained data conditions. Initially, the PMML’s feature extractor is meta-trained within the Model-Agnostic Meta-Learning framework, utilizing multiple fault classification tasks from known operational conditions in the source domain to acquire prior meta-knowledge for fault diagnosis. Subsequently, the trained feature extractor is employed to derive meta-features from few-shot samples in the target domain, and metric learning is conducted to facilitate swift and precise few-shot fault diagnosis, leveraging meta-knowledge and similarity information across sample sets. Moreover, instead of utilizing all target domain samples, the prototype of each fault category is used to capture similarity information between support and query samples. Concurrently, BiLSTM is employed to selectively embed the meta-feature prototype, enabling the extraction of more distinguishable metric features for enhanced metric learning. Finally, the effectiveness of the proposed PMML is validated through a series of comparative experiments on two fault datasets, demonstrating its outstanding performance in addressing both zero-shot and few-shot fault diagnosis challenges.
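The prototype matching step itself is straightforward to sketch: average the support-set meta-features of each fault class into a prototype, then assign each query to the class of its nearest prototype. The MAML meta-training of the feature extractor and the BiLSTM prototype embedding are omitted, and the Euclidean metric and names below are illustrative assumptions.

```python
import numpy as np

def prototype_classify(support_feats, support_labels, query_feats):
    """Few-shot fault classification by nearest prototype.

    support_feats:  (S, d) meta-features of labeled support samples.
    support_labels: (S,) fault-class labels of the support samples.
    query_feats:    (Q, d) meta-features of unlabeled query samples."""
    classes = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in classes])                     # (C, d) class prototypes
    dists = np.linalg.norm(query_feats[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)], dists               # predictions, (Q, C) distances
```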
Citations: 0