
IEEE Transactions on Knowledge and Data Engineering: Latest Publications

Securing Multi-Source Domain Adaptation With Global and Domain-Wise Privacy Demands
IF 8.9 · Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-12 · DOI: 10.1109/TKDE.2024.3459890
Shuwen Chai;Yutang Xiao;Feng Liu;Jian Zhu;Yuan Zhou
Making a large amount of training data available to deep learning models and preserving data privacy are two ever-growing concerns in the machine learning community. Multi-source domain adaptation (MDA) leverages data from different domains and aggregates it to improve performance on the target task, while the privacy leakage risk of published models facing malicious membership- or attribute-inference attacks is even more complicated than in single-source domain adaptation. In this paper, we tackle the problem of effectively protecting data privacy while training on and aggregating multi-source information, where each source domain enjoys an independent privacy budget. Specifically, we develop a differentially private MDA (DPMDA) algorithm that provides domain-wise privacy protection with an adaptive weighting scheme based on task similarity and task-specific privacy budgets. We evaluate our algorithm on three benchmark tasks and show that DPMDA can effectively leverage the different privacy budgets of the source domains and consistently outperforms existing private baselines, with a reasonable gap to the non-private state of the art.
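The sketch below illustrates the domain-wise mechanism described above, assuming per-domain gradient clipping, the standard Gaussian mechanism applied with each domain's own budget, and a weighting rule driven by task similarity and budget; the function name, the weighting rule, and all hyperparameters are assumptions for illustration, not the authors' implementation.

import numpy as np

def aggregate_private_domains(grads, similarities, epsilons, clip=1.0, delta=1e-5):
    # Clip each source domain's gradient, add Gaussian noise scaled to that
    # domain's own privacy budget, then combine with similarity/budget-based weights.
    noisy = []
    for g, eps in zip(grads, epsilons):
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))        # per-domain clipping
        sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / eps    # Gaussian-mechanism noise scale
        noisy.append(g + np.random.normal(0.0, sigma, size=g.shape))
    w = np.array(similarities) * np.array(epsilons)                 # trust similar, high-budget domains more
    w = w / w.sum()
    return sum(wi * gi for wi, gi in zip(w, noisy))

# Toy usage: three source domains with different privacy budgets.
grads = [np.random.randn(8) for _ in range(3)]
update = aggregate_private_domains(grads, similarities=[0.9, 0.5, 0.7], epsilons=[1.0, 0.5, 2.0])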
Citations: 0
LogoRA: Local-Global Representation Alignment for Robust Time Series Classification
IF 8.9 · Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-12 · DOI: 10.1109/TKDE.2024.3459908
Huanyu Zhang;Yi-Fan Zhang;Zhang Zhang;Qingsong Wen;Liang Wang
Unsupervised domain adaptation (UDA) of time series aims to teach models to identify consistent patterns across various temporal scenarios while disregarding domain-specific differences, so that they maintain their predictive accuracy and adapt effectively to new domains. However, existing UDA methods struggle to adequately extract and align both global and local features in time series data. To address this issue, we propose the Local-Global Representation Alignment framework (LogoRA), which employs a two-branch encoder comprising a multi-scale convolutional branch and a patching transformer branch. The encoder enables the extraction of both local and global representations from time series. A fusion module is then introduced to integrate these representations, enhancing domain-invariant feature alignment from multi-scale perspectives. To achieve effective alignment, LogoRA employs strategies such as invariant feature learning on the source domain, triplet loss for fine-grained alignment, and dynamic time warping-based feature alignment. Additionally, it reduces source-target domain gaps through adversarial training and per-class prototype alignment. Our evaluations on four time-series datasets demonstrate that LogoRA outperforms strong baselines by up to 12.52%, showcasing its superiority in time series UDA tasks.
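A minimal sketch of the two-branch encoder idea, assuming a univariate input series, three kernel sizes for the multi-scale convolutional branch, a patch-based Transformer for the global branch, and fusion by concatenation; the layer sizes and names are illustrative assumptions rather than LogoRA's actual architecture.

import torch
import torch.nn as nn

class TwoBranchEncoder(nn.Module):
    # Multi-scale convolutional branch for local patterns plus a patch-based
    # Transformer branch for global context, fused by concatenation.
    def __init__(self, in_ch=1, dim=64, patch=16):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(in_ch, dim, k, padding=k // 2) for k in (3, 5, 7)])
        self.patch_embed = nn.Conv1d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.fuse = nn.Linear(3 * dim + dim, dim)

    def forward(self, x):                                   # x: (batch, channels, length)
        local = torch.cat([c(x).mean(dim=-1) for c in self.convs], dim=-1)
        tokens = self.patch_embed(x).transpose(1, 2)        # (batch, num_patches, dim)
        global_feat = self.transformer(tokens).mean(dim=1)  # pooled global representation
        return self.fuse(torch.cat([local, global_feat], dim=-1))

enc = TwoBranchEncoder()
z = enc(torch.randn(4, 1, 128))                             # four univariate series of length 128
triplet = nn.TripletMarginLoss(margin=1.0)                  # used for fine-grained source-domain alignment
loss = triplet(z[:2], z[1:3], z[2:4])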
Citations: 0
Diff-RNTraj: A Structure-Aware Diffusion Model for Road Network-Constrained Trajectory Generation
IF 8.9 · Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-12 · DOI: 10.1109/TKDE.2024.3460051
Tonglong Wei;Youfang Lin;Shengnan Guo;Yan Lin;Yiheng Huang;Chenyang Xiang;Yuqing Bai;Huaiyu Wan
Trajectory data is essential for various applications. However, publicly available trajectory datasets remain limited in scale due to privacy concerns, which hinders the development of trajectory mining and its applications. Although some trajectory generation methods have been proposed to expand dataset scale, they generate trajectories in the geographical coordinate system, which poses two limitations for practical applications: 1) they fail to ensure that the generated trajectories are constrained to the road network, and 2) they lack road-related information. In this paper, we propose a new problem, road network-constrained trajectory (RNTraj) generation, which directly generates trajectories on the road network together with road-related information. Specifically, RNTraj is a hybrid type of data in which each point is represented by a discrete road segment and a continuous moving rate. To generate RNTraj, we design a diffusion model called Diff-RNTraj, which can effectively handle the hybrid RNTraj within a continuous diffusion framework by incorporating a pre-training strategy that embeds hybrid RNTraj into continuous representations. During the sampling stage, an RNTraj decoder is designed to map the continuous representations generated by the diffusion model back to the hybrid RNTraj format. Furthermore, Diff-RNTraj introduces a novel loss function to enhance the spatial validity of generated trajectories. Extensive experiments conducted on two datasets demonstrate the effectiveness of Diff-RNTraj.
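A toy sketch of the hybrid RNTraj representation, assuming a pre-trained (here random) road-segment embedding table: encoding concatenates the segment embedding with the moving rate, and decoding maps a denoised vector back to the nearest segment. The diffusion model itself is omitted; a small noise perturbation stands in for one reverse step, and all names and sizes are assumptions.

import numpy as np

rng = np.random.default_rng(0)
num_segments, emb_dim = 1000, 16
segment_table = rng.normal(size=(num_segments, emb_dim))      # stand-in for pre-trained road-segment embeddings

def encode(seg_ids, rates):
    # Hybrid RNTraj point -> continuous vector: segment embedding plus moving rate.
    return np.concatenate([segment_table[seg_ids], rates[:, None]], axis=1)

def decode(vectors):
    # Map a (possibly denoised) continuous vector back to the nearest road segment and a clipped rate.
    emb, rate = vectors[:, :emb_dim], vectors[:, emb_dim]
    dists = np.linalg.norm(emb[:, None, :] - segment_table[None, :, :], axis=-1)
    return dists.argmin(axis=1), np.clip(rate, 0.0, 1.0)

x = encode(np.array([3, 17, 17, 42]), np.array([0.2, 0.5, 0.9, 0.4]))
x_noisy = x + 0.05 * rng.normal(size=x.shape)                 # stands in for the output of a reverse-diffusion step
segments, rates = decode(x_noisy)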
Citations: 0
Mining User Consistent and Robust Preference for Unified Cross Domain Recommendation
IF 8.9 · Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-11 · DOI: 10.1109/TKDE.2024.3446581
Xiaolin Zheng;Weiming Liu;Chaochao Chen;Jiajie Su;Xinting Liao;Mengling Hu;Yanchao Tan
Cross-Domain Recommendation has been widely studied as a way to resolve the data sparsity problem by leveraging knowledge transfer across different domains. In this paper, we focus on the Unified Cross-Domain Recommendation (Unified CDR) problem, that is, how to enhance recommendation performance both within and across domains when users only partially overlap. It poses two main challenges: 1) how to obtain a robust matching solution over the whole user set, and 2) how to obtain consistent and accurate results across domains. To address these two challenges, we propose MUCRP, a cross-domain recommendation framework for the Unified CDR problem. MUCRP contains three modules: a variational rating reconstruction module, a robust variational embedding alignment module, and a cycle-consistent preference extraction module. To solve the first challenge, we propose fused Gromov-Wasserstein distribution co-clustering optimal transport, which obtains a more robust matching solution by considering both semantic and structural information. To tackle the second challenge, we propose embedding-consistent and prediction-consistent losses via a dual-autoencoder framework to achieve consistent results. Our empirical study on the Douban and Amazon datasets demonstrates that MUCRP significantly outperforms state-of-the-art models.
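A toy sketch of the two consistency terms named above (embedding-consistent and prediction-consistent losses) for users overlapping between two domains; the tensor shapes, the MSE form of the losses, and the dot-product predictor are assumptions for illustration, not MUCRP's exact objective.

import torch
import torch.nn.functional as F

# Embeddings of the same overlapping users as produced by the domain-A and domain-B encoders.
z_a = torch.randn(32, 64, requires_grad=True)
z_b = torch.randn(32, 64, requires_grad=True)
items_a = torch.randn(200, 64)                               # item embeddings in domain A

emb_consistent = F.mse_loss(z_a, z_b)                         # overlapping users should embed alike
pred_a = z_a @ items_a.t()                                    # predicted ratings from the domain-A user view
pred_b = z_b @ items_a.t()                                    # ... and from the aligned domain-B view
pred_consistent = F.mse_loss(pred_a, pred_b)                  # the two views should also predict alike

loss = emb_consistent + pred_consistent
loss.backward()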
Citations: 0
Mining Triangle-Dense Subgraphs of a Fixed Size: Hardness, Lovász Extension and Applications
IF 8.9 · Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-09 · DOI: 10.1109/tkde.2024.3444608
Aritra Konar, Nicholas D. Sidiropoulos
{"title":"Mining Triangle-Dense Subgraphs of a Fixed Size: Hardness, Lovasz extension and ´ Applications","authors":"Aritra Konar, Nicholas D. Sidiropoulos","doi":"10.1109/tkde.2024.3444608","DOIUrl":"https://doi.org/10.1109/tkde.2024.3444608","url":null,"abstract":"","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"38 1","pages":""},"PeriodicalIF":8.9,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142220108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RTOD: Efficient Outlier Detection With Ray Tracing Cores
IF 8.9 · Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-03 · DOI: 10.1109/TKDE.2024.3453901
Ziming Wang;Kai Zhang;Yangming Lv;Yinglong Wang;Zhigang Zhao;Zhenying He;Yinan Jing;X. Sean Wang
Outlier detection in data streams is a critical component of numerous applications, such as network intrusion detection, financial fraud detection, and public health. To detect abnormal behaviors in real time, these applications generally place stringent requirements on the performance of outlier detection. This paper proposes RTOD, a high-performance outlier detection approach that utilizes the RT cores in modern GPUs for acceleration. RTOD transforms distance-based outlier detection in data streams into an efficient ray tracing job. By creating spheres centered at the points within a window and casting rays from each point, RTOD identifies outlier points according to the number of intersections between rays and spheres. In addition, we propose two optimization techniques, namely Grid Filtering and Ray-BVH Inversion, to further accelerate detection on RT cores. Experimental results show that RTOD achieves up to 9.9× speedups over existing state-of-the-art outlier detection algorithms.
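A plain CPU reference of the distance-based outlier rule that RTOD maps onto RT cores (a point is an outlier if fewer than a threshold number of neighbors lie within a radius); the sphere/ray-intersection acceleration, Grid Filtering, and Ray-BVH Inversion are not reproduced here, and the parameter names are assumptions.

import numpy as np

def distance_outliers(points, radius=1.0, min_neighbors=5):
    # A point is flagged as an outlier if fewer than `min_neighbors` other points
    # lie within `radius`; RTOD evaluates the same rule on RT cores by building
    # spheres around points and counting ray/sphere intersections.
    diff = points[:, None, :] - points[None, :, :]
    within = np.linalg.norm(diff, axis=-1) <= radius
    neighbor_counts = within.sum(axis=1) - 1                  # exclude the point itself
    return neighbor_counts < min_neighbors

pts = np.vstack([np.random.randn(200, 2), [[8.0, 8.0]]])      # one far-away point
print(np.where(distance_outliers(pts))[0])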
Citations: 0
Cross-Regional Fraud Detection via Continual Learning With Knowledge Transfer
IF 8.9 · Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-08-29 · DOI: 10.1109/TKDE.2024.3451161
Yujie Li;Xin Yang;Qiang Gao;Hao Wang;Junbo Zhang;Tianrui Li
Fraud detection is a fundamental yet challenging problem for mitigating the various risks associated with fraudulent activities. However, existing methods are limited by their reliance on static data within a single geographical region, which restricts the trained model's adaptability across different regions. In practice, when enterprises expand their business into new cities or countries, training a new model from scratch can incur high computational costs and lead to catastrophic forgetting (CF). To address these limitations, we formulate cross-regional fraud detection as an incremental learning problem, enabling the development of a unified model capable of adapting across diverse regions without suffering from CF. Subsequently, we introduce Cross-Regional Continual Learning (CCL), a novel paradigm that facilitates knowledge transfer and maintains performance when incrementally training models from previously learned regions on new ones. Specifically, CCL utilizes prototype-based knowledge replay for effective knowledge transfer while implementing a parameter smoothing mechanism to alleviate forgetting. Furthermore, we construct heterogeneous trade graphs (HTGs) and leverage graph-based backbones to enhance knowledge representation and facilitate knowledge transfer by uncovering the intricate semantics inherent in cross-regional datasets. Extensive experiments demonstrate the superiority of our proposed method over baseline approaches and its substantial improvement in cross-regional fraud detection performance.
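A toy sketch of the two ingredients named above, prototype-based knowledge replay and parameter smoothing, under assumed class-mean prototypes and an exponential-smoothing rule; the class structure, shapes, and smoothing coefficient are illustrative assumptions, not the paper's exact mechanism.

import numpy as np

class RegionalContinualLearner:
    # Keeps one mean-feature prototype per class from earlier regions for replay,
    # and smooths new parameters toward the previous ones to limit forgetting.
    def __init__(self, smoothing=0.7):
        self.prototypes = {}                  # class label -> prototype feature vector
        self.params = None
        self.smoothing = smoothing

    def update_prototypes(self, feats, labels):
        for c in np.unique(labels):
            proto = feats[labels == c].mean(axis=0)
            old = self.prototypes.get(c)
            self.prototypes[c] = proto if old is None else 0.5 * (old + proto)

    def smooth_params(self, new_params):
        if self.params is None:
            self.params = new_params
        else:
            self.params = self.smoothing * self.params + (1.0 - self.smoothing) * new_params
        return self.params

learner = RegionalContinualLearner()
feats, labels = np.random.randn(100, 8), np.random.randint(0, 2, 100)
learner.update_prototypes(feats, labels)      # prototypes get replayed when training on the next region
learner.smooth_params(np.random.randn(8))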
Citations: 0
PLBR: A Semi-Supervised Document Key Information Extraction via Pseudo-Labeling Bias Rectification
IF 8.9 · Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-08-28 · DOI: 10.1109/TKDE.2024.3443928
Pengcheng Guo;Yonghong Song;Boyu Wang;Jiaohao Liu;Qi Zhang
Document key information extraction (DKIE) methods often require a large number of labeled samples, imposing substantial annotation costs in practical scenarios. Fortunately, pseudo-labeling based semi-supervised learning (PSSL) algorithms provide an effective paradigm for alleviating the reliance on labeled data by leveraging unlabeled data. However, the main challenges for PSSL in DKIE tasks are: 1) the context dependency of DKIE results in incorrect pseudo-labels, and 2) high intra-class variance and low inter-class variation in DKIE. To this end, this paper proposes a similarity matrix Pseudo-Label Bias Rectification (PLBR) semi-supervised method for DKIE tasks, which improves the quality of pseudo-labels on DKIE benchmarks with scarce labels. More specifically, the Similarity Matrix Bias Rectification (SMBR) module is proposed to improve the quality of pseudo-labels; it exploits the contextual information of DKIE data through an analysis of the similarity between labeled and unlabeled data. Moreover, a dual branch adaptive alignment (DBAA) mechanism, composed of two adaptive alignment branches, is designed to adaptively align intra-class variance and alleviate inter-class variation on DKIE benchmarks. One is the intra-class alignment branch, which adaptively aligns intra-class variance. The other is the inter-class alignment branch, which adaptively alleviates inter-class variance changes at the representation level. Extensive experimental results on two benchmarks demonstrate that PLBR achieves state-of-the-art performance, surpassing the previous SOTA by 2.11%-2.53% and 2.09%-2.49% F1-score on FUNSD and CORD, respectively, with rare labeled samples. Code will be made publicly available.
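A toy sketch of similarity-based pseudo-label rectification in the spirit of the SMBR module, assuming L2-normalized features, a row-softmax over labeled-unlabeled similarities, and a simple blend with the model's own predictions; the blending rule and all names are assumptions rather than PLBR's actual formulation.

import numpy as np

def rectify_pseudo_labels(unlabeled_feats, model_probs, labeled_feats, labeled_onehot, alpha=0.5):
    # Blend the model's pseudo-label distribution with labels propagated from the
    # most similar labeled samples, so labeled context can correct noisy pseudo-labels.
    sim = unlabeled_feats @ labeled_feats.T                   # similarity matrix (features assumed L2-normalized)
    sim = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    propagated = sim @ labeled_onehot                         # label distribution suggested by similar labeled samples
    rectified = alpha * model_probs + (1.0 - alpha) * propagated
    return rectified.argmax(axis=1), rectified.max(axis=1)    # pseudo-label and its confidence

u = np.random.randn(10, 32); u /= np.linalg.norm(u, axis=1, keepdims=True)
lab = np.random.randn(50, 32); lab /= np.linalg.norm(lab, axis=1, keepdims=True)
onehot = np.eye(4)[np.random.randint(0, 4, 50)]
probs = np.random.dirichlet(np.ones(4), size=10)
labels, conf = rectify_pseudo_labels(u, probs, lab, onehot)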
Citations: 0
TripleSurv: Triplet Time-Adaptive Coordinate Learning Approach for Survival Analysis
IF 8.9 · Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-08-28 · DOI: 10.1109/TKDE.2024.3450910
Liwen Zhang;Lianzhen Zhong;Fan Yang;Linglong Tang;Di Dong;Hui Hui;Jie Tian
A core challenge in survival analysis is to model the distribution of time-to-event data, where the event of interest may be a death, a failure, or the occurrence of a specific event. Previous studies have shown that ranking and maximum likelihood estimation loss functions are widely used learning approaches for survival analysis. However, ranking losses focus only on the ordering of survival times and do not consider the potential effect of samples' exact survival time values. Furthermore, maximum likelihood estimation is unbounded and easily affected by outliers (e.g., censored data), which may degrade modeling performance. To handle the complexities of the learning process and exploit valuable survival time values, we propose a time-adaptive coordinate loss function, TripleSurv, which achieves adaptive adjustment by introducing the differences in survival time between sample pairs into the ranking; this encourages the model to quantitatively rank the relative risk of pairs, ultimately enhancing the accuracy of predictions. Most importantly, TripleSurv quantifies the relative risk between samples through pairwise ranking and uses the time interval as a trade-off to calibrate the robustness of the model to the sample distribution. TripleSurv is evaluated on three real-world survival datasets and a public synthetic dataset. The results show that our method outperforms state-of-the-art methods and exhibits good performance and robustness when modeling various sophisticated data distributions with different censoring rates.
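A toy sketch of a time-adaptive pairwise ranking term in the spirit described above: comparable pairs are penalized when their predicted risks disagree with the event ordering, and the required margin grows with the gap between survival times. The margin form and hyperparameters are assumptions, not the paper's exact loss.

import torch

def time_adaptive_ranking_loss(risk, time, event, scale=1.0):
    # For comparable pairs (i had the event before j was observed), penalize
    # risk[i] not exceeding risk[j], with the required margin growing with the
    # gap between the two survival times.
    t_i, t_j = time[:, None], time[None, :]
    comparable = (t_i < t_j) & (event[:, None] > 0)
    margin = (t_j - t_i) / scale
    pairwise = torch.relu(margin - (risk[:, None] - risk[None, :]))
    return (pairwise * comparable).sum() / comparable.sum().clamp(min=1)

risk = torch.randn(16, requires_grad=True)
time = torch.rand(16) * 10.0
event = torch.randint(0, 2, (16,))
loss = time_adaptive_ranking_loss(risk, time, event)
loss.backward()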
Citations: 0
Triple Factorization-Based SNLF Representation With Improved Momentum-Incorporated AGD: A Knowledge Transfer Approach
IF 8.9 · Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-08-27 · DOI: 10.1109/TKDE.2024.3450469
Ming Li;Yan Song;Derui Ding;Ran Sun
Symmetric, high-dimensional and sparse (SHiDS) networks usually contain rich knowledge regarding various patterns. To adequately extract useful information from SHiDS networks, a novel biased triple factorization-based (TF) symmetric and non-negative latent factor (SNLF) model is put forward by utilizing the transfer learning (TL) method, namely the biased TL-incorporated TF-SNLF (BT$^{2}$-SNLF) model. The proposed BT$^{2}$-SNLF model mainly includes the following four ideas: 1) the implicit knowledge of the auxiliary matrix in the ternary rating domain is transferred to the target matrix in the numerical rating domain, facilitating feature extraction; 2) two linear bias vectors are incorporated into the objective function to discover the knowledge describing individual entity-oriented effects; 3) an improved momentum-incorporated additive gradient descent algorithm is developed to speed up model convergence as well as guarantee the non-negativity of target SHiDS networks; and 4) a rigorous proof is provided to show that, under the assumption that the objective function is $L$-smooth and $\mu$-convex, when $t \geq t_{0}$ the algorithm begins to descend, and it can find an $\epsilon$-solution within $O(\ln ((1+\frac{\mu L}{L(1+\mu )+8\mu })/\epsilon ))$. Experimental results on six datasets from real applications demonstrate the effectiveness of our proposed T$^{2}$-SNLF and BT$^{2}$-SNLF models.
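A toy sketch of triple-factorization SNLF with a momentum-incorporated update, approximating a symmetric non-negative matrix as Y ≈ X B X^T and keeping the factors non-negative by truncation; the plain squared-error gradients, learning rate, and momentum coefficient are assumptions and do not reproduce the paper's biased, transfer-learning-based model.

import numpy as np

def tf_snlf(Y, rank=4, lr=1e-3, beta=0.9, iters=1000):
    # Approximate a symmetric non-negative Y as X @ B @ X.T, keeping both factors
    # non-negative by truncation and accelerating gradient descent with momentum.
    rng = np.random.default_rng(0)
    n = Y.shape[0]
    X = 0.3 * rng.random((n, rank))
    B = 0.3 * rng.random((rank, rank))
    vX, vB = np.zeros_like(X), np.zeros_like(B)
    for _ in range(iters):
        E = X @ B @ X.T - Y                                   # residual
        gX = E @ X @ B.T + E.T @ X @ B                        # gradient of 0.5*||E||^2 w.r.t. X
        gB = X.T @ E @ X                                      # gradient w.r.t. B
        vX, vB = beta * vX - lr * gX, beta * vB - lr * gB     # momentum-incorporated steps
        X, B = np.maximum(X + vX, 0.0), np.maximum(B + vB, 0.0)
    return X, B

rng = np.random.default_rng(1)
Y = rng.random((20, 20))
Y = (Y + Y.T) / 2.0                                           # symmetric, non-negative toy target
X, B = tf_snlf(Y)
print(np.linalg.norm(X @ B @ X.T - Y))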
Citations: 0