
Latest articles in Concurrency and Computation-Practice & Experience

Sentiment Intensity Contrastive Text-Enhanced Fusion Network
IF 1.5 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-28 · DOI: 10.1002/cpe.70593
Heng Jiang, Lianke Shi, Deyu Kong, Jiahao Hua, Honggui Shang, Lijia Chen

Multimodal sentiment analysis (MSA) has recently encountered two major challenges: non-textual modalities are often affected by noise, and sentiment intensity differences are difficult to capture. To address these issues, we propose a Sentiment Intensity Contrastive Text-Enhanced Fusion Network (SICTEF Net), which achieves deep collaboration among text, audio, and visual modalities through three key mechanisms. First, a grouped-channel-attention based Feature Enhancement Module (EMA) is designed to mitigate modality-specific noise and emphasize emotion-sensitive cues by combining spatial–channel interaction mapping with dual-branch attention fusion. Second, a text-centered cross-modal fusion mechanism is introduced, where bidirectional multi-head self-attention and a residual-enhanced encoder jointly enable complementary mappings between text and non-text modalities, thereby producing intermediate representations that preserve semantic primacy while incorporating fine-grained complementary information. Third, a sentiment-intensity weighted contrastive learning strategy dynamically assigns weights to positive and negative sample pairs according to their sentiment intensity differences, allowing the model to more precisely distinguish samples with varying degrees of similarity in the embedding space. Experimental evaluation on the CMU-MOSI and CMU-MOSEI datasets demonstrates that SICTEF Net consistently outperforms state-of-the-art baselines in binary accuracy, F1 score, seven-class accuracy, mean absolute error (MAE), and Pearson correlation. Comprehensive ablation studies further confirm the complementary benefits of EMA, the text-enhanced Transformer, and sentiment-intensity contrastive learning. These results indicate that combining text-driven deep interaction, non-text modality enhancement via channel attention, and contrastive learning can improve the accuracy and robustness of multimodal sentiment analysis.
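The sentiment-intensity weighted contrastive strategy lends itself to a compact illustration. The sketch below shows one plausible way to weight positive and negative pairs by their intensity gap; the `pos_thresh` criterion, the `1 + gap` weighting, and the tensor shapes are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def intensity_weighted_contrastive(z, y, temperature=0.1, pos_thresh=0.5):
    """z: (N, d) sample embeddings; y: (N,) sentiment-intensity labels.
    Pairs with close intensities count as positives; negatives are weighted
    by their intensity gap, so dissimilar pairs are pushed apart harder.
    Both rules are assumptions for illustration, not the paper's exact loss."""
    z = F.normalize(z, dim=1)
    sim = torch.exp(z @ z.t() / temperature)       # pairwise similarity kernel
    eye = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(eye, 0.0)                # drop self-similarity
    gap = (y[:, None] - y[None, :]).abs()          # sentiment-intensity differences
    pos = (gap < pos_thresh) & ~eye                # assumed positive-pair criterion
    weights = 1.0 + gap                            # assumed weighting: larger gap, larger weight
    denom = (weights * sim).sum(dim=1, keepdim=True)
    log_prob = torch.log(sim / denom + 1e-12)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

loss = intensity_weighted_contrastive(torch.randn(16, 64), torch.rand(16) * 6 - 3)
```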

Citations: 0
Sequence Recommendation for Mobile Application via Time Interval-Aware Attention and Contrastive Learning
IF 1.5 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-27 · DOI: 10.1002/cpe.70585
Buqing Cao, Junyi Chen, Ziming Xie, Wenyu Zhao, Sheng Lin, Longxin Zhang

Mobile application recommendation has emerged as a pivotal domain within the realm of personalized recommendation systems. Traditional mobile application sequence recommendation approaches are predominantly dedicated to the pursuit of sophisticated sequence encoders to achieve more precise representations. However, existing sequence recommendation methods primarily consider the sequential order of historical App interactions, overlooking the time intervals between applications. This oversight hinders the model's capability to fully unearth the temporal correlations in user behavior, consequently limiting the accuracy and personalization of mobile application recommendations. Moreover, the interactions between users and mobile applications are typically sparse, which weakens the model's generalization capabilities. To address these issues, we propose a novel method for mobile application sequence recommendation, incorporating time interval-aware attention and contrastive learning (called Ti-CoRe). Specifically, this approach introduces a novel sequence augmentation strategy based on similarity replacement within a contrastive learning framework. By considering textual similarities between applications, this method selectively replaces applications that possess lower similarity scores to generate augmented sequences, increasing the diversity of the sample space and mitigating data sparsity. Furthermore, integrating a time interval-aware mechanism into the BERT4Rec model, the paper presents a new T-BERT encoder. It precisely assesses the influence of fluctuating time intervals on the prediction of the subsequent mobile application, thereby ensuring a more nuanced app representation. Experiments conducted on the 360APP real dataset demonstrate that Ti-CoRe consistently outperforms various baseline models in terms of NDCG and HR metrics.
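The similarity-replacement augmentation can be sketched independently of the full Ti-CoRe pipeline. In the toy version below, the precomputed similarity matrix `sim`, the replacement fraction, and the coherence scoring are assumptions for illustration; the paper derives similarities from application text.

```python
import numpy as np

def similarity_replace(seq, sim, frac=0.2):
    """seq: list of app ids; sim: (n_apps, n_apps) precomputed textual
    similarity matrix (an assumption). Items least similar to the rest of
    the sequence are swapped for their most similar unused neighbours,
    producing an augmented view for contrastive learning."""
    seq = list(seq)
    def coherence(a):
        others = [b for b in seq if b != a]
        return sim[a, others].mean() if others else 1.0
    k = max(1, int(len(seq) * frac))
    for pos in sorted(range(len(seq)), key=lambda p: coherence(seq[p]))[:k]:
        for cand in np.argsort(sim[seq[pos]])[::-1]:   # highest similarity first
            if cand not in seq:                        # avoid duplicating items
                seq[pos] = int(cand)
                break
    return seq

rng = np.random.default_rng(0)
S = rng.random((10, 10)); S = (S + S.T) / 2            # toy symmetric similarities
print(similarity_replace([1, 4, 7, 2], S))
```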

Citations: 0
An Optimized Explainable Cross-Attention Transformer With Separable Convolutions for Multimodal Chronic Kidney Disease Detection
IF 1.5 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-27 · DOI: 10.1002/cpe.70567
B. Guruprakash, K. Ramya, M. M. Yamuna Devi, R. Suguna Devi

Chronic kidney disease is a life-threatening health condition that affects millions of people across the globe. Early detection and classification play an important role in treating the disease and controlling its progression. With the number of chronic kidney disease patients rising worldwide, traditional healthcare systems face challenges such as progression to end-stage renal disease and high morbidity and mortality. Millions of lives can be saved with early detection and proper treatment. This study proposes a novel optimized explainable cross-attention transformer-based separable neural network model to detect and classify chronic kidney disease. In this model, preprocessing steps such as image resizing, data augmentation, image normalization, missing-data handling, data encoding, and data imputation are used to clean the data. Then, a separable convolutional network and a transformer encoder are utilized to select the optimal attributes from the preprocessed data. The fully connected classification layers with a softmax activation function perform the multiclass classification. The interpretability and transparency of the model are improved using local interpretable model-agnostic explanations, and convergence and training are accelerated by integrating a stochastic gradient descent optimizer. Two publicly accessible kidney disease-related datasets are used to validate model performance. The experiments indicate that the proposed model attains a superior accuracy of 98.45% with the lowest error rate of 1.55%, demonstrating its advantage over existing techniques.
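The backbone the abstract describes, separable convolutions feeding a transformer encoder, can be outlined in a few lines of PyTorch. This is a generic sketch: the channel counts, encoder depth, input size, and the four-class head are placeholders, not the paper's tuned architecture.

```python
import torch
import torch.nn as nn

class SeparableConvBlock(nn.Module):
    """Depthwise-separable convolution: a spatial per-channel filter followed
    by a cheap 1x1 channel-mixing convolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)
    def forward(self, x):
        return torch.relu(self.pointwise(self.depthwise(x)))

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
x = torch.randn(1, 3, 32, 32)                 # one preprocessed image
feats = SeparableConvBlock(3, 64)(x)          # (1, 64, 32, 32)
tokens = feats.flatten(2).transpose(1, 2)     # (1, 1024, 64) spatial tokens
pooled = encoder(tokens).mean(dim=1)          # global representation
logits = nn.Linear(64, 4)(pooled)             # multiclass head (softmax in the loss)
```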

Citations: 0
FPGA-Accelerated Real-Time Tennis Serving Robot With DSP-Efficient Convolutional Neural Network
IF 1.5 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-27 · DOI: 10.1002/cpe.70579
Tengfei Li, Shenshen Gu, Yulong Ren

Artificial intelligence hardware accelerators are gaining increasing importance in domains such as computer vision and robotics. However, deploying Convolutional Neural Networks (CNNs) on embedded systems with constrained resources and memory continues to pose a major challenge. Motivated by the requirements of robotic vision, this paper presents a DSP-Efficient Packing Strategy (DEPS) accelerator architecture tailored for lightweight CNNs, improving both computational throughput and hardware efficiency in real-time robotic applications. Unlike previous FPGA designs that underutilize DSP blocks, the proposed DEPS enables the parallel execution of twelve 3-bit multiplications within a single DSP48E2 unit. A layer-wise pipelined mapping scheme is also proposed, which directly maps each CNN layer onto hardware without intermediate buffering, ensuring continuous computation and minimizing latency. The proposed accelerator is incorporated into an intelligent tennis serving robot, serving as the real-time vision module for object detection. Experimental results from VGG7-tiny and UltraNet demonstrate throughputs of 299.4 GOPS and 340.0 GOPS, respectively, alongside power efficiencies of 80.1 GOPS/W and 89.2 GOPS/W. The robotic system deployment confirms that superior DSP utilization is achieved, enabling rapid, energy-efficient, and reliable perception. This work highlights the potential of the proposed design for application in resource-constrained edge platforms and practical robotics.
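The core idea of packing several low-precision multiplications into one wide multiplier can be demonstrated with plain integers: operands are placed in disjoint bit fields with enough guard bits that partial products never overlap. The sketch below uses unsigned 3-bit weights and a simplified field layout; the actual DEPS scheme targets the signed 27 × 18 multiplier of a DSP48E2 and packs twelve products, which needs pre-adder and sign-correction logic not shown here.

```python
def packed_multiply(ws, a, w_bits=3, a_bits=4):
    """Multiply one unsigned activation `a` by several unsigned weights `ws`
    using a single wide integer multiply, then unpack the partial products.
    A field width of w_bits + a_bits guarantees adjacent products cannot
    overlap, which is the invariant DSP-packing schemes rely on."""
    field = w_bits + a_bits
    packed = 0
    for i, w in enumerate(ws):
        assert 0 <= w < (1 << w_bits) and 0 <= a < (1 << a_bits)
        packed |= w << (i * field)          # place each weight in its own field
    product = packed * a                    # the one "DSP" multiplication
    mask = (1 << field) - 1
    return [(product >> (i * field)) & mask for i in range(len(ws))]

# three products from one multiply: 5*9, 3*9, 7*9
assert packed_multiply([5, 3, 7], 9) == [45, 27, 63]
```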

Citations: 0
Machine Learning-Based Data Deduplication: Techniques, Challenges, and Future Directions
IF 1.5 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-25 · DOI: 10.1002/cpe.70574
Ravneet Kaur, Harcharan Jit Singh, Inderveer Chana

Data deduplication plays an important role in modern data management as it reduces storage costs and ensures consistency by eliminating redundant records. The traditional data deduplication methods are effective for exact matches but struggle with adaptability and detecting near-exact duplicate records in unstructured or complex data. Machine learning (ML) addresses these limitations by using pattern recognition, feature learning, and statistical modeling to identify subtle similarities between records. This review classifies ML-based deduplication techniques into supervised, unsupervised, semi-supervised, and deep learning methodologies. It also discusses key challenges, including class imbalance, model interpretability, and computational overhead. The paper also explores recent developments in federated learning, real-time deduplication, and multimodal techniques to highlight current trends in these areas. Finally, the paper identifies key open issues and proposes a unified perspective for scalable, real-time deduplication systems that can accommodate diverse data types, structures, and system requirements.
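As a concrete instance of the unsupervised family the review classifies, near-duplicate records can be flagged with character n-gram TF-IDF vectors and cosine similarity. The toy records and the 0.8 threshold below are illustrative choices, not values from the review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

records = [
    "John Smith, 42 Oak Street, Springfield",
    "Jon Smith, 42 Oak St., Springfield",
    "Mary Jones, 7 Elm Road, Shelbyville",
]
# character n-grams tolerate typos and abbreviations ("Street" vs "St.")
sim = cosine_similarity(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit_transform(records)
)
dupes = [(i, j) for i in range(len(records)) for j in range(i + 1, len(records))
         if sim[i, j] > 0.8]
print(dupes)  # should pair the two near-identical records: [(0, 1)]
```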

Citations: 0
An Explainable Ensemble Machine Learning Method for Electric Vehicles Energy Consumption Rate Estimation
IF 1.5 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-24 · DOI: 10.1002/cpe.70571
Mohammed Zaid Ghawy, Shuyan Chen, Sajan Shaikh, Aamir Hussain, Rajasekhar Balasubramanian, Yongfeng Ma

The rapid adoption of electric vehicles (EVs) highlights the need for intelligent systems to improve energy efficiency and optimize driving range. Since energy consumption and driving-range modeling are closely related, understanding the energy consumption (EC) of EVs can provide essential insights to drivers and reduce “range anxiety.” Previous studies have relied on traditional analytical and statistical methods, which lack both representative coverage of influential factors and interpretability of the models applied in EC modeling. To address this issue, we propose an explainable ensemble machine learning model to predict the EC of EVs, considering the most important features and the factors that exert the greatest influence on EC. The Spritmonitor public real-world dataset is used for this study. First, data preprocessing is conducted before feeding data into the ensemble method. Second, the Energy Consumption Rate (ECR) is predicted using Gradient Boosting Regression Trees (GBRT). The proposed predictive framework demonstrates superior prediction accuracy compared to baseline models: GBRT achieved the highest R² (1 and 0.99 for training and testing, respectively) and the lowest MAE (0.08) and RMSE (0.16) compared to other models, including XGBoost, LightGBM, and CatBoost. Finally, SHAP (SHapley Additive exPlanations) analysis was applied to explain the proposed model and identify the most influential dynamic factors, including driving range, capacity, speed, state of charge (SOC), ambient temperature, road type, driving style, air conditioning, and heating usage. The results suggest that the proposed framework can effectively enhance EC prediction for EVs and facilitate the analysis of driving factors, thereby supporting intelligent trip planning and adaptive energy-aware management in transportation systems and providing insightful feedback to drivers.
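The modeling pipeline described above, GBRT regression followed by SHAP attribution, maps directly onto standard libraries. The sketch below uses synthetic stand-in features rather than the Spritmonitor schema, and scikit-learn defaults rather than the paper's tuned hyperparameters.

```python
import numpy as np
import pandas as pd
import shap  # the third-party `shap` package
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "speed_kmh": rng.uniform(20, 120, n),          # illustrative feature names,
    "ambient_temp_c": rng.uniform(-10, 35, n),     # not the paper's exact schema
    "trip_distance_km": rng.uniform(5, 300, n),
})
# synthetic ECR target: faster and colder trips consume more
y = 12 + 0.06 * X["speed_kmh"] - 0.12 * X["ambient_temp_c"] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("test R^2:", round(model.score(X_te, y_te), 3))

# per-feature attributions, as in the paper's SHAP analysis
shap_values = shap.TreeExplainer(model).shap_values(X_te)
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0).round(3))))
```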

Citations: 0
Carbon Emission Prediction for Gas Power Plants Based on Deep Learning Under Small-Sample Conditions
IF 1.5 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-24 · DOI: 10.1002/cpe.70591
Xiaozhou Fan, Zhe Wang, Hanwen Bi, Ruiyang Wang

Accurate forecasting of carbon emissions from power generation enterprises is essential under China's dual-control policy. Although deep learning methods show strong potential, studies on their optimal configuration remain limited. This paper proposes a hybrid deep learning framework integrating a convolutional neural network (CNN), bidirectional long short-term memory (BiLSTM), and an attention mechanism for carbon emission prediction in natural gas power plants. The study compared two distinct optimization methodologies: a structured design strategy encompassing light, medium, and heavy configurations, and Bayesian optimization for hyperparameter tuning. The models were evaluated using 5-fold cross-validation on 619 operational samples from two 487.1-MW condensing units in a power plant in Hainan, China. The medium configuration achieved the best balance between accuracy, efficiency, and stability, with R² = 0.9833, RMSE = 0.0342, and MAE = 0.0242. Under small-sample conditions, the structured design approach outperformed Bayesian optimization by 0.16% in accuracy while requiring only 7.42% of the training time. The proposed framework provides an efficient and interpretable reference for selecting deep learning architectures in small-sample industrial regression tasks and supports intelligent, low-carbon power generation applications.
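The CNN-BiLSTM-attention composition can be expressed as a small PyTorch module. The layer widths, kernel size, and the additive attention form below are illustrative assumptions, not the paper's tuned "medium" configuration.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAttn(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 16, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(16, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # scores each time step
        self.head = nn.Linear(2 * hidden, 1)    # scalar emission estimate

    def forward(self, x):                       # x: (batch, time, features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(h)                     # (batch, time, 2*hidden)
        a = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        return self.head((a * h).sum(dim=1)).squeeze(-1)

model = CNNBiLSTMAttn(n_features=6)
y_hat = model(torch.randn(8, 24, 6))            # 8 samples, 24 steps, 6 signals
```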

Citations: 0
A Real-Time Automated Library Inventory System Based on Edge-Cloud Collaboration
IF 1.5 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-24 · DOI: 10.1002/cpe.70573
Lu Zhu, Zhihui Gu, Kai Zhu, Xingcheng Xu, Jingzhi Wang, Yuanyuan Liu

Library inventory is vital for collection management and reader satisfaction. Conventional manual methods cannot support real-time updates, while existing automated solutions relying on centralized cloud computing suffer from bandwidth and latency limitations. To address these issues, we propose an edge-cloud collaborative real-time book inventory system. Spine detection and text recognition are executed on embedded edge devices, while the cloud handles rapid data retrieval to balance timeliness and accuracy. We design lightweight models for edge deployment, including the Library You Only Look Once (Lib-YOLO) detector with a StarNet backbone, shared convolutional head, and dual-scale hierarchical detection, supporting rotated objects for robust spine extraction. The optimized Paddle Practical Optical Character Recognition (PP-OCR) pipeline removes text rectification and integrates a filtering algorithm to reduce redundant computation and improve efficiency. Deployed on an NVIDIA Jetson Nano, the system achieves 73 ms spine detection latency, 191 ms text recognition latency, and 97.1% overall accuracy under simulated library conditions. The Lib-YOLO model contains only 1.39 M parameters with 99% mean average precision (mAP), demonstrating the feasibility of precise, real-time inventorying in resource-constrained embedded environments.
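The redundancy-filtering step in the OCR pipeline is a natural candidate for a minimal sketch. The abstract does not spell out the algorithm, so the confidence threshold and text-normalization rule below are plausible stand-ins only, not the paper's filter.

```python
def filter_ocr_results(results, min_conf=0.6):
    """results: list of (text, confidence) hits from the spine OCR stage.
    Drops low-confidence hits and duplicate reads of the same spine, keeping
    the highest-confidence variant of each; assumed behavior, not the paper's
    exact filtering algorithm."""
    seen, kept = set(), []
    for text, conf in sorted(results, key=lambda r: -r[1]):  # best reads first
        key = "".join(text.split()).lower()                  # normalize spacing/case
        if conf >= min_conf and key not in seen:
            seen.add(key)
            kept.append((text, conf))
    return kept

hits = [("Deep Learning", 0.93), ("deep  learning", 0.71), ("blur", 0.32)]
print(filter_ocr_results(hits))   # [('Deep Learning', 0.93)]
```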

Citations: 0
SecureChain: A Blockchain-Based Secure Model for Sharing Privacy-Preserved Data Using Local Differential Privacy
IF 1.5 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-24 · DOI: 10.1002/cpe.70473
Altaf Hussain, Laraib Javed, Muhammad Inam Ul Haq, Razaullah Khan, Wajahat Akbar, Razaz Waheeb Attar, Ahmed Alhazmi, Amal Hassan Alhazmi, Tariq Hussain

Privacy-Preserving Data Sharing (PPDS) masks an individual's collected data (e.g., medical healthcare data) before it is disseminated by organizations for analysis and research. Patient data contains sensitive values that must be handled while ensuring certain privacy conditions are met, which minimizes the risk of re-identifying an individual record within the group of privacy-preserved data. However, with advances in technology (e.g., Big Data, the Internet of Things (IoT), and Blockchain), existing classical privacy-preserving techniques are becoming obsolete. In this paper, we propose a blockchain-based secure data sharing technique named “SecureChain”, which preserves the privacy of an individual record using local differential privacy (LDP). The three distinguishing features of the proposed approach are lower latency, higher throughput, and improved privacy. The proposed model outperforms the benchmarks in terms of both latency and throughput, and it improves accuracy to 88.53%, compared with the 49% and 85% achieved by its counterparts. The experimental results verify that the proposed approach outperforms its counterparts.
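An LDP layer perturbs each record on the client before it ever reaches the chain. The classic randomized-response mechanism below illustrates how a single binary attribute can be protected and later de-biased in aggregate; whether SecureChain uses this exact encoder is an assumption, as LDP admits several mechanisms.

```python
import numpy as np

def randomized_response(bit, epsilon):
    """Report the true bit with probability p = e^eps / (e^eps + 1),
    otherwise flip it; a textbook LDP mechanism for one binary attribute."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1)
    return bit if np.random.random() < p else 1 - bit

def unbiased_mean(reports, epsilon):
    """De-bias the aggregate: E[report] = f(2p - 1) + (1 - p), solve for f."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1)
    return (np.mean(reports) + p - 1) / (2 * p - 1)

true_bits = np.random.binomial(1, 0.3, 10_000)     # 30% of clients hold the trait
noisy = [randomized_response(b, epsilon=1.0) for b in true_bits]
print(round(unbiased_mean(noisy, epsilon=1.0), 3))  # close to 0.3
```

No individual report reveals the client's true bit with certainty, yet the population frequency remains recoverable, which is the trade-off LDP-based sharing schemes exploit.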

Citations: 0
AT-SPNet: A Personalized Federated Spatio-Temporal Modeling Method for Cross-City Traffic Prediction
IF 1.5 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-23 · DOI: 10.1002/cpe.70577
Ying Wang, Renjie Fan, Bo Gong, Hong Wen, Yuanxi Yu

For cross-city traffic prediction, the significant heterogeneity of traffic data across cities and the requirement for privacy protection make it challenging for conventional centralized spatiotemporal graph modeling techniques to balance predictive performance and data security. Therefore, this paper proposes AT-SPNet, a personalized federated spatiotemporal modeling approach specifically designed for cross-city traffic prediction. This method decouples the spatiotemporal modeling paths through the construction of a shared temporal branch and a hidden local spatial branch, thereby mitigating the heterogeneity of cross-city traffic data while preserving privacy. In the temporal branch, Gated Recurrent Units and a multi-head attention mechanism are incorporated to capture temporal dependencies, and a Squeeze-and-Excitation module is employed to enhance the extraction of informative features. In the spatial branch, a Spatial Attention Fusion module based on a triple-attention mechanism is designed to capture spatial features from multiple spatial perspectives, combined with static graph convolution and dynamic graph attention to construct a dual-modal information fusion path. Furthermore, to alleviate the adverse effects of cross-city data heterogeneity in federated training, a personalized federated learning strategy is introduced, which enables differentiated fusion of client spatial features without sharing raw data. Experiments on four real-world traffic datasets demonstrate that AT-SPNet outperforms existing methods in both prediction accuracy and cross-city generalization, validating the effectiveness and practical applicability of the proposed approach for cross-city traffic prediction.
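The personalization scheme, averaging only the shared temporal branch while spatial parameters stay on-device, can be sketched as a prefix-filtered federated average. The `temporal.` naming convention and the toy client module below are assumptions for illustration, not AT-SPNet's actual parameter layout.

```python
import copy
import torch

def fed_avg_shared(client_models, shared_prefix="temporal."):
    """Average only the parameters under `shared_prefix` across clients;
    everything else (the hidden local spatial branch) never leaves the device."""
    states = [m.state_dict() for m in client_models]
    avg = copy.deepcopy(states[0])
    for k in avg:
        if k.startswith(shared_prefix):                    # shared temporal branch
            avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
    for m, s in zip(client_models, states):
        local = {k: v for k, v in s.items() if not k.startswith(shared_prefix)}
        m.load_state_dict({**avg, **local})                # restore each client's own spatial branch
    return client_models

class TinyClient(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.temporal = torch.nn.Linear(4, 4)   # shared across cities
        self.spatial = torch.nn.Linear(4, 4)    # personalized per city

clients = fed_avg_shared([TinyClient() for _ in range(3)])
```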

Citations: 0