
Latest Publications from IEEE Transactions on Big Data

Large Language Models for Link Stealing Attacks Against Graph Neural Networks
IF 7.5, CAS Division 3 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-11-07. DOI: 10.1109/TBDATA.2024.3489427
Faqian Guan;Tianqing Zhu;Hui Sun;Wanlei Zhou;Philip S. Yu
Graph data contains rich node features and unique edge information and has been applied across various domains, such as citation networks and recommendation systems. Graph Neural Networks (GNNs) are specialized for handling such data and have shown impressive performance in many applications. However, GNNs may contain sensitive information and be susceptible to privacy attacks. For example, link stealing is a type of attack in which attackers infer whether two nodes are linked. Previous link stealing attacks primarily relied on posterior probabilities from the target GNN model, neglecting the significance of node features. Additionally, variations in node classes across datasets lead to posterior probabilities of different dimensions, which makes it challenging to use a single model to effectively conduct link stealing attacks on different datasets. To address these challenges, we introduce Large Language Models (LLMs) to perform link stealing attacks on GNNs. LLMs can effectively integrate textual features and exhibit strong generalizability, enabling attacks to handle diverse data dimensions across various datasets. We design two distinct LLM prompts to effectively combine the textual features and posterior probabilities of graph nodes. Through these designed prompts, we fine-tune the LLM to adapt to the link stealing attack task. Furthermore, we fine-tune the LLM on multiple datasets, enabling it to learn features from different datasets simultaneously. Experimental results show that our approach significantly enhances the performance of existing link stealing attack tasks in both white-box and black-box scenarios. Our method can execute link stealing attacks across different datasets using only a single model, making link stealing attacks more applicable to real-world scenarios.
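As a rough illustration of the prompt design described above, the sketch below assembles a link-query prompt from two nodes' textual descriptions and posterior vectors; the field names, wording, and training-record format are assumptions for illustration, not the authors' actual templates.

```python
# Hypothetical illustration: assembling a link-stealing prompt from node text
# and GNN posterior probabilities. Field names and wording are assumptions,
# not the authors' actual prompt templates.

def build_link_prompt(node_a, node_b):
    """node_a / node_b are dicts with 'text' and 'posterior' (list of floats)."""
    def fmt(node, name):
        probs = ", ".join(f"{p:.3f}" for p in node["posterior"])
        return f"{name} description: {node['text']}\n{name} class posteriors: [{probs}]"

    return (
        "Task: decide whether the two graph nodes below are connected by an edge.\n"
        f"{fmt(node_a, 'Node A')}\n"
        f"{fmt(node_b, 'Node B')}\n"
        "Answer with 'linked' or 'not linked'."
    )

# Example fine-tuning record (supervised label appended for training):
example = {
    "prompt": build_link_prompt(
        {"text": "Paper on graph convolution.", "posterior": [0.7, 0.2, 0.1]},
        {"text": "Paper on spectral GNNs.", "posterior": [0.6, 0.3, 0.1]},
    ),
    "completion": "linked",
}
print(example["prompt"])
```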
{"title":"Large Language Models for Link Stealing Attacks Against Graph Neural Networks","authors":"Faqian Guan;Tianqing Zhu;Hui Sun;Wanlei Zhou;Philip S. Yu","doi":"10.1109/TBDATA.2024.3489427","DOIUrl":"https://doi.org/10.1109/TBDATA.2024.3489427","url":null,"abstract":"Graph data contains rich node features and unique edge information, which have been applied across various domains, such as citation networks or recommendation systems. Graph Neural Networks (GNNs) are specialized for handling such data and have shown impressive performance in many applications. However, GNNs may contain of sensitive information and susceptible to privacy attacks. For example, link stealing is a type of attack in which attackers infer whether two nodes are linked or not. Previous link stealing attacks primarily relied on posterior probabilities from the target GNN model, neglecting the significance of node features. Additionally, variations in node classes across different datasets lead to different dimensions of posterior probabilities. The handling of these varying data dimensions posed a challenge in using a single model to effectively conduct link stealing attacks on different datasets. To address these challenges, we introduce Large Language Models (LLMs) to perform link stealing attacks on GNNs. LLMs can effectively integrate textual features and exhibit strong generalizability, enabling attacks to handle diverse data dimensions across various datasets. We design two distinct LLM prompts to effectively combine textual features and posterior probabilities of graph nodes. Through these designed prompts, we fine-tune the LLM to adapt to the link stealing attack task. Furthermore, we fine-tune the LLM using multiple datasets and enable the LLM to learn features from different datasets simultaneously. Experimental results show that our approach significantly enhances the performance of existing link stealing attack tasks in both white-box and black-box scenarios. Our method can execute link stealing attacks across different datasets using only a single model, making link stealing attacks more applicable to real-world scenarios.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 4","pages":"1879-1893"},"PeriodicalIF":7.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144597809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Data-Centric Graph Learning: A Survey
IF 7.5, CAS Division 3 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-11-06. DOI: 10.1109/TBDATA.2024.3489412
Yuxin Guo;Deyu Bo;Cheng Yang;Zhiyuan Lu;Zhongjian Zhang;Jixi Liu;Yufei Peng;Chuan Shi
The history of artificial intelligence (AI) has witnessed the significant impact of high-quality data on various deep learning models, such as ImageNet for AlexNet and ResNet. Recently, instead of designing ever more complex neural architectures as model-centric approaches do, the attention of the AI community has shifted to data-centric approaches, which focus on better processing of data to strengthen the ability of neural models. Graph learning, which operates on ubiquitous topological data, also plays an important role in the era of deep learning. In this survey, we comprehensively review graph learning approaches from the data-centric perspective and aim to answer three crucial questions: (1) when to modify graph data, (2) what part of the graph data needs modification to unlock the potential of various graph models, and (3) how to safeguard graph models from problematic data influence. Accordingly, we propose a novel taxonomy based on the stages in the graph learning pipeline, and highlight the processing methods for the different data structures in graph data, i.e., topology, features and labels. Furthermore, we analyze some potential problems embedded in graph data and discuss how to solve them in a data-centric manner. Finally, we provide some promising future directions for data-centric graph learning.
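To make the taxonomy concrete, here is a toy sketch of data-centric operations on the three graph structures the survey distinguishes (topology, features, labels); the specific augmentations and thresholds are illustrative assumptions, not methods from the survey.

```python
# Toy data-centric graph operations on topology, features, and labels.
# The random masking and majority-vote heuristic are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) < 0.4).astype(float)      # toy adjacency (topology)
feats = rng.random((6, 8))                           # toy node features
labels = rng.integers(0, 3, size=6)                  # toy node labels

# Topology: randomly drop a fraction of edges (sparsification / augmentation).
drop_mask = rng.random(adj.shape) < 0.2
adj_aug = adj * (~drop_mask)

# Features: mask a random subset of feature dimensions (feature augmentation).
feat_mask = rng.random(feats.shape[1]) < 0.25
feats_aug = feats.copy()
feats_aug[:, feat_mask] = 0.0

# Labels: flag nodes whose label disagrees with most neighbors
# (a crude proxy for label-noise detection).
suspect = []
for v in range(adj.shape[0]):
    nbrs = np.flatnonzero(adj[v])
    if nbrs.size and (labels[nbrs] == labels[v]).mean() < 0.5:
        suspect.append(v)
print("edges kept:", int(adj_aug.sum()), "suspect labels:", suspect)
```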
{"title":"Data-Centric Graph Learning: A Survey","authors":"Yuxin Guo;Deyu Bo;Cheng Yang;Zhiyuan Lu;Zhongjian Zhang;Jixi Liu;Yufei Peng;Chuan Shi","doi":"10.1109/TBDATA.2024.3489412","DOIUrl":"https://doi.org/10.1109/TBDATA.2024.3489412","url":null,"abstract":"The history of artificial intelligence (AI) has witnessed the significant impact of high-quality data on various deep learning models, such as ImageNet for AlexNet and ResNet. Recently, instead of designing more complex neural architectures as model-centric approaches, the attention of AI community has shifted to data-centric ones, which focuses on better processing data to strengthen the ability of neural models. Graph learning, which operates on ubiquitous topological data, also plays an important role in the era of deep learning. In this survey, we comprehensively review graph learning approaches from the data-centric perspective, and aim to answer three crucial questions: <italic>(1) when to modify graph data</i>, <italic>(2) what part of the graph data needs modification</i> to unlock the potential of various graph models, and <italic>(3) how to safeguard graph models</i> from problematic data influence. Accordingly, we propose a novel taxonomy based on the stages in the graph learning pipeline, and highlight the processing methods for different data structures in the graph data, i.e., topology, feature and label. Furthermore, we analyze some potential problems embedded in graph data and discuss how to solve them in a data-centric manner. Finally, we provide some promising future directions for data-centric graph learning.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 1","pages":"1-20"},"PeriodicalIF":7.5,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AnesFormer: An End-to-End Framework for EEG-Based Anesthetic State Classification
IF 7.5, CAS Division 3 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-11-01. DOI: 10.1109/TBDATA.2024.3489419
Qihang Wang;Ying Chen;Qinge Xiao
To determine the real-time changes in brain arousal introduced by anesthetics, the electroencephalogram (EEG) is often used as objective neuroimaging evidence of the neurobehavioral states of patients. However, EEG signals often suffer from a low signal-to-noise ratio due to environmental noise and artifacts, which limits their application to reliable estimation of the depth of anesthesia (DoA), especially under high cross-subject variability. In this study, we propose an end-to-end deep learning based framework, termed AnesFormer, which contains a data selection model, a self-attention based classification model, and a baseline update mechanism. These three components are integrated in a dynamic and seamless manner to improve the effectiveness and robustness of DoA estimation in a leave-one-out setting. In the experiments, we apply the proposed framework to an office-based dataset and a hospital-based dataset, and use seven existing models as benchmarks. In addition, we conduct an ablation experiment to show the significance of each component of AnesFormer. Our main results indicate that 1) the proposed framework generally performs better than existing DoA estimation methods in terms of effectiveness and robustness, and 2) each designed component of AnesFormer is likely to contribute to the improvement in DoA classification.
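For intuition, the sketch below shows a minimal self-attention classifier over EEG feature windows, in the spirit of the attention-based classification stage; the dimensions, layer counts, and pooling choice are assumptions, and this is not the authors' AnesFormer architecture.

```python
# Minimal self-attention classifier over EEG windows (illustrative only).
import torch
import torch.nn as nn

class TinyEEGAttentionClassifier(nn.Module):
    def __init__(self, n_channels=16, d_model=64, n_classes=3):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)            # per-time-step embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)              # anesthetic-state logits

    def forward(self, x):                  # x: (batch, time, channels)
        h = self.encoder(self.proj(x))     # self-attention over time steps
        return self.head(h.mean(dim=1))    # mean-pool over time, then classify

model = TinyEEGAttentionClassifier()
logits = model(torch.randn(8, 128, 16))    # 8 windows, 128 time steps, 16 channels
print(logits.shape)                         # torch.Size([8, 3])
```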
{"title":"AnesFormer: An End-to-End Framework for EEG-Based Anesthetic State Classification","authors":"Qihang Wang;Ying Chen;Qinge Xiao","doi":"10.1109/TBDATA.2024.3489419","DOIUrl":"https://doi.org/10.1109/TBDATA.2024.3489419","url":null,"abstract":"To determine the real-time changes in brain arousal introduced by anesthetics, Electroencephalogram (EEG) is often used as an objective neuroimaging evidence to link the neurobehavioral states of patients. However, EEG signals often suffer from a low signal-to-noise ratio due to environmental noise and artifacts, which limits its application for a reliable estimation of depth of anesthesia (DoA), especially under high cross-subject variability. In this study, we propose an end-to-end deep learning based framework, termed as AnesFormer, which contains a data selection model, a self-attention based classification model, and a baseline update mechanism. These three components are integrated in a dynamic and seamless manner to achieve the goal of improving the effectiveness and robustness of DoA estimation in a leave-one-out setting. In the experiment, we apply the proposed framework to an office-based dataset and a hospital-based dataset, and use seven existing models as benchmarks. In addition, we conduct an ablation experiment to show the significance of each component in AnesFormer. Our main results indicate that 1) the proposed framework generally performs better than the existing methods for DoA estimation in terms of effectiveness and robustness; 2) each designed component in AnesFormer is likely to contribute to the DoA classification improvement.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 3","pages":"1357-1368"},"PeriodicalIF":7.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143949175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Stable Learning via Dual Feature Learning
IF 7.5, CAS Division 3 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-11-01. DOI: 10.1109/TBDATA.2024.3489413
Shuai Yang;Xin Li;Minzhi Wu;Qianlong Dang;Lichuan Gu
Stable learning aims to leverage the knowledge in a relevant source domain to learn a prediction model that generalizes well to target domains. Recent advances in stable learning mainly proceed by eliminating spurious correlations between irrelevant features and labels through sample reweighting or causal feature selection. However, most existing stable learning methods either weaken only part of the spurious correlations or discard part of the true causal relationships, resulting in degraded generalization performance. To tackle these issues, we propose the Dual Feature Learning (DFL) algorithm for stable learning, which consists of two phases. Phase 1 first learns a set of sample weights to balance the distributions of the treated and control groups corresponding to each feature, and then uses the learned sample weights to assist feature selection in identifying part of the irrelevant features, so that the spurious correlations between these irrelevant features and the labels can be completely isolated. Phase 2 learns two groups of sample weights again on the sub-dataset obtained after feature selection, and then produces high-quality feature representations by integrating a weighted cross-entropy model and an autoencoder model to further remove spurious correlations. Experiments on synthetic data and four real-world datasets verify the effectiveness of DFL in comparison with eleven state-of-the-art methods.
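The snippet below sketches the Phase-1 reweighting idea: per-sample weights are learned so that, for one binarized feature, the remaining features are balanced between its treated and control groups; the loss, optimizer, and synthetic data are illustrative assumptions rather than the DFL algorithm itself.

```python
# Toy sample reweighting: balance other features across the treated/control
# groups of one binarized "treatment" feature (illustrative assumptions only).
import torch

torch.manual_seed(0)
n, d = 500, 5
X = torch.randn(n, d)
X[:, 0] = (X[:, 1] + 0.5 * torch.randn(n) > 0).float()   # spuriously correlated treatment

logits = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.05)
t, Z = X[:, 0], X[:, 1:]

for step in range(300):
    w = torch.softmax(logits, dim=0) * n                  # per-sample weights, mean ~ 1
    wt, wc = w * t, w * (1 - t)
    m_treat = (wt.unsqueeze(1) * Z).sum(0) / wt.sum().clamp_min(1e-8)
    m_ctrl = (wc.unsqueeze(1) * Z).sum(0) / wc.sum().clamp_min(1e-8)
    loss = ((m_treat - m_ctrl) ** 2).sum()                # group-mean discrepancy
    opt.zero_grad(); loss.backward(); opt.step()

print("balance loss after reweighting:", loss.item())
```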
{"title":"Stable Learning via Dual Feature Learning","authors":"Shuai Yang;Xin Li;Minzhi Wu;Qianlong Dang;Lichuan Gu","doi":"10.1109/TBDATA.2024.3489413","DOIUrl":"https://doi.org/10.1109/TBDATA.2024.3489413","url":null,"abstract":"Stable learning aims to leverage the knowledge in a relevant source domain to learn a prediction model that can generalize well to target domains. Recent advances in stable learning mainly proceed by eliminating spurious correlations between irrelevant features and labels through sample reweighting or causal feature selection. However, most existing stable learning methods either only weaken partial spurious correlations or discard part of true causal relationships, resulting in generalization performance degradation. To tackle these issues, we propose the Dual Feature Learning (DFL) algorithm for stable learning, which consists of two phases. Phase 1 first learns a set of sample weights to balance the distribution of treated and control groups corresponding to each feature, and then uses the learned sample weights to assist feature selection to identify part of irrelevant features for completely isolating spurious correlations between these irrelevant features and labels. Phase 2 first learns two groups of sample weights again using the subdataset after feature selection, and then obtains high-quality feature representations by integrating a weighted cross-entropy model and an autoencoder model to further get rid of spurious correlations. Using synthetic and four real-world datasets, the experiments have verified the effectiveness of DFL, in comparison with eleven state-of-the-art methods.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 4","pages":"1852-1866"},"PeriodicalIF":7.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144597667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
M-Graphormer: Multi-Channel Graph Transformer for Node Representation Learning
IF 7.5, CAS Division 3 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-11-01. DOI: 10.1109/TBDATA.2024.3489418
Xinglong Chang;Jianrong Wang;Mingxiang Wen;Yingkui Wang;Yuxiao Huang
In recent years, the Graph Transformer has demonstrated superiority on various graph-level tasks by facilitating global interactions among nodes. However, on node-level tasks, existing Graph Transformers do not perform as well as expected. In practice, a node in a real-world graph does not necessarily have relationships with every other node, and this global interaction weakens node features. This raises a fundamental question: should we partition out an appropriate interaction channel based on the graph structure, so that noisy and irrelevant information is filtered and every node can aggregate information in its optimal channel? We first perform a series of experiments on manually created graphs with varying homophily ratios. Surprisingly, we observe that different graph structures indeed require distinct optimal interaction channels. This leads us to ask whether we can develop a partitioning rule that ensures each node interacts with relevant and valuable targets. To address this challenge, we propose a novel Graph Transformer named the Multi-channel Graphormer. The model is evaluated on six network datasets with different homophily ratios for the node classification task. Moreover, comprehensive experiments are conducted on two real datasets for the recommendation task. Experimental results show that the Multi-channel Graphormer surpasses state-of-the-art baselines.
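The following toy sketch illustrates the channel idea: attention scores are computed globally but masked so each node only attends within a chosen hop-distance channel; the 2-hop channel and single attention head are assumptions, not the M-Graphormer design.

```python
# Toy structure-restricted attention: mask global attention to a 2-hop channel.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 8
adj = (rng.random((n, n)) < 0.3).astype(float)
adj = np.maximum(adj, adj.T); np.fill_diagonal(adj, 1.0)    # undirected + self-loops
H = rng.normal(size=(n, d))                                  # node features

reach_2hop = (np.linalg.matrix_power(adj, 2) > 0)            # 2-hop interaction channel
scores = (H @ H.T) / np.sqrt(d)                              # dot-product attention scores
scores[~reach_2hop] = -1e9                                   # mask nodes outside the channel
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
H_out = attn @ H                                             # channel-restricted aggregation
print(H_out.shape)
```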
{"title":"M-Graphormer: Multi-Channel Graph Transformer for Node Representation Learning","authors":"Xinglong Chang;Jianrong Wang;Mingxiang Wen;Yingkui Wang;Yuxiao Huang","doi":"10.1109/TBDATA.2024.3489418","DOIUrl":"https://doi.org/10.1109/TBDATA.2024.3489418","url":null,"abstract":"In recent years, the Graph Transformer has demonstrated superiority on various graph-level tasks by facilitating global interactions among nodes. However, as for node-level tasks, the existing Graph Transformer cannot perform as well as expected. Actually, a node in a real-world graph does not necessarily have relationships with every other node, and this global interaction weakens node features. This raises a fundamental question: should we partition out an appropriate interaction channel based on graph structure so that noisy and irrelevant information will be filtered and every node can aggregate information in the optimal channel? We first perform a series of experiments on manually created graphs with varying homophily ratios. Surprisingly, we observe that different graph structures indeed require distinct optimal interaction channels. This leads us to ask whether we can develop a partitioning rule that ensures each node interacts with relevant and valuable targets. To overcome this challenge, we propose a novel Graph Transformer named Multi-channel Graphormer. The model is evaluated on six network datasets with different homophily ratios for the node classification task. Moreover, comprehensive experiments are conducted on two real datasets for the recommendation task. Experimental results show that the Multi-channel Graphormer surpasses state-of-the-art baselines, demonstrating superior performance.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 4","pages":"1867-1878"},"PeriodicalIF":7.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144597811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Big Data Analysis for Industrial Activity Recognition Using Attention-Inspired Sequential Temporal Convolution Network
IF 7.5, CAS Division 3 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-11-01. DOI: 10.1109/TBDATA.2024.3489414
Altaf Hussain;Tanveer Hussain;Waseem Ullah;Samee Ullah Khan;Min Je Kim;Khan Muhammad;Javier Del Ser;Sung Wook Baik
Deep-learning-based human activity recognition (HAR) methods have significantly transformed a wide range of domains in recent years. However, the adoption of big data techniques in industrial applications remains challenging due to issues such as generalized weight optimization, diverse viewpoints, and the complex spatiotemporal features of videos. To address these challenges, this work presents an industrial HAR framework consisting of two main phases. First, a squeeze bottleneck attention block (SBAB) is introduced to enhance the contextual learning capability of the backbone model, which allows for the selection and refinement of an optimal feature vector. In the second phase, we propose an effective sequential temporal convolutional network (STCN), designed in a parallel fashion to mitigate the exploding and vanishing gradient issues associated with sequence learning. The high-dimensional spatiotemporal feature vectors from the STCN undergo further sequential refinement through the proposed SBAB to optimize the features for HAR and enhance overall performance. The efficacy of the proposed framework is validated through extensive experiments on six datasets, including data from industrial and general activities.
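As one plausible reading of a squeeze bottleneck attention unit, the sketch below implements a squeeze-and-excitation style channel-attention block; the reduction ratio and layer layout are assumptions, not the paper's SBAB.

```python
# Squeeze-and-excitation style channel attention (illustrative reading of SBAB).
import torch
import torch.nn as nn

class SqueezeBottleneckAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                   # "squeeze" spatial dims
        self.bottleneck = nn.Sequential(                      # bottleneck MLP
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                                     # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.bottleneck(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                          # reweight feature channels

feat = torch.randn(2, 64, 7, 7)
print(SqueezeBottleneckAttention(64)(feat).shape)             # torch.Size([2, 64, 7, 7])
```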
{"title":"Big Data Analysis for Industrial Activity Recognition Using Attention-Inspired Sequential Temporal Convolution Network","authors":"Altaf Hussain;Tanveer Hussain;Waseem Ullah;Samee Ullah Khan;Min Je Kim;Khan Muhammad;Javier Del Ser;Sung Wook Baik","doi":"10.1109/TBDATA.2024.3489414","DOIUrl":"https://doi.org/10.1109/TBDATA.2024.3489414","url":null,"abstract":"Deep-learning-based human activity recognition (HAR) methods have significantly transformed a wide range of domains over recent years. However, the adoption of Big Data techniques in industrial applications remains challenging due to issues such as generalized weight optimization, diverse viewpoints, and the complex spatiotemporal features of videos. To address these challenges, this work presents an industrial HAR framework consisting of two main phases. First, a squeeze bottleneck attention block (SBAB) is introduced to enhance the learning capabilities of the backbone model for contextual learning, which allows for the selection and refinement of an optimal feature vector. In the second phase, we propose an effective sequential temporal convolutional network (STCN), which is designed in parallel fashion to mitigate the issues of exploding and vanishing gradients associated with sequence learning. The high-dimensional spatiotemporal feature vectors from the STCN undergo further refinement through our proposed SBAB in a sequential manner, to optimize the features for HAR and enhance the overall performance. The efficacy of the proposed framework is validated through extensive experiments on six datasets, including data from industrial and general activities.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 4","pages":"1840-1851"},"PeriodicalIF":7.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144597668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dynamic Radio Map Construction With Minimal Manual Intervention: A State Space Model-Based Approach With Imitation Learning
IF 7.5, CAS Division 3 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-10-31. DOI: 10.1109/TBDATA.2024.3489425
Xiaoqiang Zhu;Tie Qiu;Wenyu Qu;Xiaobo Zhou;Tuo Shi;Tianyi Xu
Fingerprint localization methods typically require a substantial amount of manual effort to collect fingerprint data from various scenarios in order to construct an accurate radio map. While some existing research has attempted to use path planning strategies to save labor costs, these approaches are often time-consuming and prone to locally optimal solutions. To address these shortcomings, this paper proposes a novel approach that utilizes imitation learning to construct and update a highly accurate radio map with minimal manual intervention in dynamic environments. Specifically, we employ a multivariate Gaussian process model to fit a rough standby fingerprint database with only a few pilot data points. We then utilize a state space model to calculate the variation range of the pilot data, which forms the CSI error band used to filter the rough radio map. Imitation learning and a confidence coefficient are utilized to predict and calibrate the global CSI data distribution. Finally, we use the K-nearest neighbor algorithm for real-time localization. Experimental results show that our proposed algorithm outperforms several state-of-the-art approaches in most test cases, exhibiting low computational complexity and lower localization error while saving 73.3% of the manual workload.
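The final K-nearest-neighbor localization step can be sketched as follows: a query CSI vector is matched against the radio-map fingerprints and the position estimate is the centroid of the K closest entries; the synthetic data and K=3 are assumptions.

```python
# Minimal KNN fingerprint localization over a synthetic radio map.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0, 10, size=(50, 2))         # fingerprint coordinates (m)
fingerprints = rng.normal(size=(50, 30))             # CSI/RSS feature per position

def knn_localize(query, k=3):
    dists = np.linalg.norm(fingerprints - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return positions[nearest].mean(axis=0)            # centroid of the K matches

true_idx = 7
query = fingerprints[true_idx] + 0.1 * rng.normal(size=30)   # noisy measurement
print("estimate:", knn_localize(query), "truth:", positions[true_idx])
```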
{"title":"Dynamic Radio Map Construction With Minimal Manual Intervention: A State Space Model-Based Approach With Imitation Learning","authors":"Xiaoqiang Zhu;Tie Qiu;Wenyu Qu;Xiaobo Zhou;Tuo Shi;Tianyi Xu","doi":"10.1109/TBDATA.2024.3489425","DOIUrl":"https://doi.org/10.1109/TBDATA.2024.3489425","url":null,"abstract":"Fingerprint localization methods typically require a substantial amount of manual effort to collect fingerprint data from various scenarios to construct an accurate radio map. While some existing research has attempted to use path planning strategies to save on labor costs, these approaches often suffer from being time-consuming and prone to locally optimal solutions. To address these shortcomings, our paper proposes a novel approach that utilizes imitation learning to construct and update a highly accurate radio map with minimal manual intervention in dynamic environments. Specifically, we employ a multivariate Gaussian process model to fit a rough standby fingerprint database with only a few pilot data points. We then utilize a state space model to calculate the variation range of the pilot data, which forms the CSI error band used to filter the rough radio map. Imitation learning and a confidence coefficient are utilized to predict and calibrate the global CSI data distribution. And we utilize the K-nearest neighbor algorithm to achieve the real-time localization function. Experimental results show that our proposed algorithm outperforms several state-of-the-art approaches in most test cases, exhibiting low computation complexity, lower localization error, and saving 73.3% of the manual workload.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 4","pages":"1799-1812"},"PeriodicalIF":7.5,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144597828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Attention-Based Complex Logical Query on Temporal Knowledge Graph via Graph Neural Network
IF 7.5, CAS Division 3 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-10-31. DOI: 10.1109/TBDATA.2024.3489421
Luyi Bai;Linshuo Xu;Lin Zhu
Answering complex logical queries on large-scale knowledge graphs (KGs) efficiently and accurately has always been crucial for question-answering systems. Recent studies have significantly improved the performance of complex logical queries on massive knowledge graphs by leveraging graph neural networks (GNNs). However, existing GNN-based methods still have limitations in dealing with long-sequence logical queries. They usually decompose complex queries into multiple independent first-order logical queries, which prevents global optimization, and query accuracy drops sharply as the query length increases. In addition, knowledge in the real world is dynamically changing, but most existing methods are better suited to static knowledge graphs, and there is still much room for improvement when dealing with logical queries on temporal knowledge graphs. In this paper, we propose a novel Temporal Complex Logical Query (TCLQ) model to perform temporal logical queries on temporal knowledge graphs. We add time series embeddings to the GNN and use multi-layer GRUs to aggregate the node features of previous and current time steps, which effectively enhances the time series reasoning ability of the model. To address the problem that the accuracy of logical query models decreases significantly as the query sequence length increases, we establish a multi-level attention coefficient model to learn and optimize whole logical queries, thereby reducing the error accumulation that occurs when queries are decomposed into multiple independent first-order logical queries. We conduct experiments on multiple temporal datasets and demonstrate the effectiveness of TCLQ.
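The sketch below shows one way to aggregate a node's embeddings across timestamps with a multi-layer GRU before answering a query; the embedding sizes, layer count, and random inputs are assumptions, not the TCLQ model.

```python
# Aggregating per-node embeddings over time with a 2-layer GRU (illustrative).
import torch
import torch.nn as nn

n_nodes, n_steps, d = 10, 6, 32
node_seq = torch.randn(n_nodes, n_steps, d)          # per-node embeddings at t-5 ... t
gru = nn.GRU(input_size=d, hidden_size=d, num_layers=2, batch_first=True)

out, h_last = gru(node_seq)                          # out: (nodes, steps, d)
temporal_node_emb = out[:, -1, :]                    # state after seeing all timestamps
print(temporal_node_emb.shape)                       # torch.Size([10, 32])
```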
{"title":"Attention-Based Complex Logical Query on Temporal Knowledge Graph via Graph Neural Network","authors":"Luyi Bai;Linshuo Xu;Lin Zhu","doi":"10.1109/TBDATA.2024.3489421","DOIUrl":"https://doi.org/10.1109/TBDATA.2024.3489421","url":null,"abstract":"Answering complex logical queries on large-scale Knowledge Graphs (KGs) efficiently and accurately has always been crucial for question-answering systems. Recent studies have significantly improved the performance of complex logical queries on massive knowledge graphs by leveraging graph neural networks (GNNs). However, the existing GNN-based methods still have limitations in dealing with long-sequence logical queries. They usually decompose complex queries into multiple independent first-order logical queries, which leads to the inability to optimize globally, and the query accuracy will drop sharply with the increase of query length. In addition, the knowlege in the real world is dynamically changing, but most of the existing methods are more suitable for dealing with static knowledge graphs, and there is still much room for improvement when dealing with logical queries in temporal knowledge graphs. In this paper, we propose a novel Temporal Complex Logical Query (TCLQ) model to achieve temporal logical queries on temporal knowledge graphs. We add time series embedding into GNN, and use multi-layer GRUs to aggregate the node features of previous time and current time, which effectively enhances the time series reasoning ability of the model. In order to solve the problem that the accuracy of logical query model decreases significantly with the increase of query sequence length, we establish a multi-level attention coefficients model to learn and optimize the whole logical queries, thus reducing the error accumulation problem when the queries are decomposed into multiple independent first-order logical queries. We conduct experiments on multiple temporal datasets and demonstrate the effectiveness of TCLQ.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 4","pages":"1828-1839"},"PeriodicalIF":7.5,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144597729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DGNN: Decoupled Graph Neural Networks With Structural Consistency Between Attribute and Graph Embedding Representations
IF 7.5, CAS Division 3 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-10-31. DOI: 10.1109/TBDATA.2024.3489420
Jinlu Wang;Jipeng Guo;Yanfeng Sun;Junbin Gao;Shaofan Wang;Yachao Yang;Baocai Yin
Graph neural networks (GNNs) exhibit a robust capability for representation learning on graphs with complex structures, demonstrating superior performance across various applications. Most existing GNNs utilize graph convolution operations that integrate attribute and structural information in a coupled way. From an optimization perspective, these GNNs seek to learn a consensus, compromised embedding representation that balances attribute and graph information, selectively exploring and retaining the valid information in each. To obtain a more comprehensive embedding representation, a novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced. DGNN separately explores distinctive embedding representations from the attribute and graph spaces through decoupled terms. Considering that the semantic graph, derived from the attribute feature space, contains different node connection information and enhances the topological graph, both topological and semantic graphs are integrated by DGNN for powerful embedding representation learning. Further, structural consistency between the attribute embedding and the graph embedding is promoted to effectively eliminate redundant information and establish a soft connection. This process involves facilitating factor sharing for adjacency matrix reconstruction, which aims to explore consensus and high-level correlations. Finally, a more powerful and comprehensive representation is achieved through the concatenation of these embeddings. Experimental results on several graph benchmark datasets demonstrate its superiority in node classification tasks.
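A toy sketch of the decoupling idea follows: one encoder embeds raw attributes, another embeds graph-propagated signals, a consistency term aligns their similarity structures, and the two embeddings are concatenated; the linear encoders and the Frobenius-style consistency loss are illustrative assumptions, not the DGNN objective.

```python
# Decoupled attribute/graph encoders with a structural-consistency term (toy).
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d_in, d_emb = 20, 16, 8
X = torch.randn(n, d_in)
A = (torch.rand(n, n) < 0.2).float()
A_hat = (A + A.T + torch.eye(n)).clamp(max=1.0)       # symmetric adjacency with self-loops
A_hat = A_hat / A_hat.sum(dim=1, keepdim=True)        # row-normalized propagation matrix

attr_enc = nn.Linear(d_in, d_emb)                     # attribute-space encoder
graph_enc = nn.Linear(d_in, d_emb)                    # graph-space encoder (on propagated X)

Za = attr_enc(X)
Zg = graph_enc(A_hat @ X)
consistency = ((Za @ Za.T) - (Zg @ Zg.T)).pow(2).mean()   # align the two similarity structures
Z = torch.cat([Za, Zg], dim=1)                        # final concatenated representation
print(Z.shape, consistency.item())
```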
{"title":"DGNN: Decoupled Graph Neural Networks With Structural Consistency Between Attribute and Graph Embedding Representations","authors":"Jinlu Wang;Jipeng Guo;Yanfeng Sun;Junbin Gao;Shaofan Wang;Yachao Yang;Baocai Yin","doi":"10.1109/TBDATA.2024.3489420","DOIUrl":"https://doi.org/10.1109/TBDATA.2024.3489420","url":null,"abstract":"Graph neural networks (GNNs) exhibit a robust capability for representation learning on graphs with complex structures, demonstrating superior performance across various applications. Most existing GNNs utilize graph convolution operations that integrate both attribute and structural information through coupled way. And these GNNs, from an optimization perspective, seek to learn a consensus and compromised embedding representation that balances attribute and graph information, selectively exploring and retaining valid information in essence. To obtain a more comprehensive embedding representation, a novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced. DGNN separately explores distinctive embedding representations from the attribute and graph spaces by decoupled terms. Considering that the semantic graph, derived from attribute feature space, contains different node connection information and provides enhancement for the topological graph, both topological and semantic graphs are integrated by DGNN for powerful embedding representation learning. Further, structural consistency between the attribute embedding and the graph embedding is promoted to effectively eliminate redundant information and establish soft connection. This process involves facilitating factor sharing for adjacency matrices reconstruction, which aims at exploring consensus and high-level correlations. Finally, a more powerful and comprehensive representation is achieved through the concatenation of these embeddings. Experimental results conducted on several graph benchmark datasets demonstrate its superiority in node classification tasks.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 4","pages":"1813-1827"},"PeriodicalIF":7.5,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144597808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reward Shaping Based on Optimal-Policy-Free
IF 7.5, CAS Division 3 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-10-31. DOI: 10.1109/TBDATA.2024.3489415
Jianghui Sang;Yongli Wang;Zaki Ahmad Khan;Xiaoliang Zhou
Existing research on potential-based reward shaping (PBRS) relies on the optimal policy of a Markov decision process (MDP), where the optimal policy is regarded as the ground truth. However, in some practical application scenarios, there is an extrapolation error between the computed optimal policy and the real-world optimal policy, and the computed optimal policy is therefore unreliable. To address this challenge, we design an optimal-policy-free reward shaping method that removes the dependence on the optimal policy. We view reinforcement learning as probabilistic inference on a directed graph. Essentially, this inference propagates information from the rewarding states in the MDP and yields a function that is leveraged as the potential function for PBRS. Our approach utilizes a contrastive learning technique on the directed graph Laplacian; this technique does not change the structure of the directed graph. The directed graph Laplacian is then used to approximate the true state transition matrix of the MDP. The potential function in PBRS can be learned through a message passing mechanism built on this directed graph Laplacian. Experiments on Atari, MuJoCo and MiniWorld show that our approach outperforms competitive algorithms.
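For reference, the standard potential-based shaping term is F(s, s') = γΦ(s') − Φ(s); the snippet below applies it with a placeholder potential standing in for the one the paper learns via graph-based inference (the grid potential is an illustrative assumption).

```python
# Standard potential-based reward shaping with a toy placeholder potential.
GAMMA = 0.99

def phi(state):
    """Toy potential: negative Manhattan distance to a goal cell on a grid."""
    goal = (4, 4)
    return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))

def shaped_reward(reward, state, next_state, gamma=GAMMA):
    # Adding gamma * phi(s') - phi(s) preserves the optimal policy (Ng et al., 1999).
    return reward + gamma * phi(next_state) - phi(state)

print(shaped_reward(0.0, (0, 0), (0, 1)))   # moving toward the goal yields a positive bonus
```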
{"title":"Reward Shaping Based on Optimal-Policy-Free","authors":"Jianghui Sang;Yongli Wang;Zaki Ahmad Khan;Xiaoliang Zhou","doi":"10.1109/TBDATA.2024.3489415","DOIUrl":"https://doi.org/10.1109/TBDATA.2024.3489415","url":null,"abstract":"Existing research on potential-based reward shaping (PBRS) relies on optimal policy in Markov decision process (MDP) where optimal policy is regarded as the ground truth. However, in some practical application scenarios, there is an extrapolation error challenge between the computed optimal policy and the real-world optimal policy. At this time, the optimal policy is unreliable. To address this challenge, we design a Reward Shaping based on Optimal-Policy-Free to get rid of the dependence on the optimal policy. We view reinforcement learning as probabilistic inference on a directed graph. Essentially, this inference propagates information from the rewarding states in the MDP and results in a function which is leveraged as a potential function for PBRS. Our approach utilizes a contrastive learning technique on directed graph Laplacian. Here, this technique does not change the structure of the directed graph. Then, the directed graph Laplacian is used to approximate the true state transition matrix in MDP. The potential function in PBRS can be learned through the message passing mechanism which is built on this directed graph Laplacian. The experiments on Atari, MuJoCo and MiniWorld show that our approach outperforms the competitive algorithms.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 4","pages":"1787-1798"},"PeriodicalIF":7.5,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144597666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0