
Latest Articles in Concurrency and Computation-Practice & Experience

Cybersecurity-Driven Strategy: Resilient Base Stations Deployment for Robust Open RAN 5G/6G Networks
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-12 | DOI: 10.1002/cpe.70524
Ibtihal A. Alablani, Mohammed J. F. Alenazi

The proliferation of Open Radio Access Network (O-RAN) architectures in 5G/6G networks introduces unprecedented cybersecurity challenges. Strategic base station deployment constitutes a fundamental determinant of network security posture and cyberattack resilience. In this paper, a novel cybersecurity-driven deployment strategy for resilient base station positioning is proposed, using an intelligent Resilient Ant Colony Optimization (iResACO) algorithm. The algorithm integrates security considerations directly into deployment optimization, employing bio-inspired collective intelligence to discover patterns that balance coverage efficiency with attack resilience. Through extensive simulations in a 3.6 km × 3.6 km urban environment in Riyadh, Saudi Arabia, experimental results demonstrate superior performance, achieving 92.04% overall effectiveness with 96.0% coverage probability and 100% critical infrastructure protection. Under various cyberattack scenarios ranging from random to coordinated sophisticated attacks, the algorithm maintains coverage above 87% while preserving complete protection of critical facilities. The proposed approach provides a practical framework for deploying secure, resilient 5G/6G networks capable of withstanding evolving cyber threats while ensuring uninterrupted service to essential infrastructure.
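The abstract does not reproduce the algorithm itself; as a rough illustration of the ant-colony idea it builds on, the sketch below selects base-station sites by roulette-wheel sampling weighted by pheromone and a heuristic score. The candidate fields ("coverage", "resilience"), the 50/50 heuristic weighting, and all parameter values are assumptions for illustration, not the published iResACO design.

```python
import random

def aco_site_selection(candidates, n_sites, n_ants=20, n_iters=50,
                       alpha=1.0, beta=2.0, rho=0.1):
    """Toy ant-colony loop for choosing base-station sites.

    candidates: list of dicts with hypothetical 'coverage' and 'resilience'
    scores in [0, 1]; the published iResACO objective is not reproduced here.
    """
    pheromone = [1.0] * len(candidates)
    heuristic = [0.5 * c["coverage"] + 0.5 * c["resilience"] for c in candidates]
    best, best_score = set(), -1.0

    for _ in range(n_iters):
        for _ in range(n_ants):
            weights = [(pheromone[i] ** alpha) * (heuristic[i] ** beta)
                       for i in range(len(candidates))]
            chosen = set()
            while len(chosen) < n_sites:
                # roulette-wheel draw over the not-yet-chosen candidates
                total = sum(w for i, w in enumerate(weights) if i not in chosen)
                r, acc, picked = random.uniform(0, total), 0.0, None
                for i, w in enumerate(weights):
                    if i in chosen:
                        continue
                    acc += w
                    picked = i
                    if acc >= r:
                        break
                chosen.add(picked)
            score = sum(heuristic[i] for i in chosen) / n_sites
            if score > best_score:
                best, best_score = chosen, score
        # evaporate, then reinforce pheromone along the best layout so far
        pheromone = [(1 - rho) * p for p in pheromone]
        for i in best:
            pheromone[i] += best_score
    return best, best_score

sites, quality = aco_site_selection(
    [{"coverage": random.random(), "resilience": random.random()} for _ in range(30)],
    n_sites=5)
```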

Citations: 0
Communication Frequency in Megatron-LM: Experimental Insights Applied to Heterogeneous Distributed Training Time Prediction
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-12 | DOI: 10.1002/cpe.70500
HaoRan Zhang, Yanzhao Feng, Zhengwei Chen, Yutong Tian, Xiaoli Zheng, Cong Liu, Sheng Wang, Jie Ren, Yucong Li, Rui Zhu

As model parameters increase exponentially, distributed training has become essential for advancing modern deep neural networks. Megatron-LM, an efficient distributed training framework developed by NVIDIA, enables the training of trillion-parameter models on thousands of GPUs by integrating tensor, pipeline, and data parallelism. Its computational efficiency has established it as a foundational tool for training large-scale models. Rapid identification of optimal parallel configurations for specific GPU clusters is critical for maximizing computational resource utilization, with training time prediction serving as a key evaluation metric. The high cost and limited availability of high-performance GPUs, particularly those based on NVIDIA architectures, have made the construction of large-scale heterogeneous clusters a practical solution to resource and cost constraints. However, existing prediction methods do not reliably or efficiently account for the computational and communication complexities inherent in heterogeneous GPU clusters. To address this gap, HATP (Heterogeneous-Aware Time Predictor) is introduced as a novel performance prediction method specifically designed for heterogeneous GPU clusters. For any given parallel configuration, HATP rapidly and accurately simulates execution times to inform the optimization of parallel strategies. To address communication differences among heterogeneous GPUs, comprehensive experimental analyses are conducted and analytical expressions are derived to characterize the communication frequency patterns in Megatron-LM's parallel strategies. This work presents the first systematic quantification of communication operations within the Megatron-LM framework, ensuring that performance predictions remain highly accurate even in complex, heterogeneous environments. Furthermore, to account for computational differences among heterogeneous GPUs, a layer-level computational performance acquisition scheme is proposed to reduce the impact of fine-grained operator overlap and additional memory operations. Experimental results demonstrate that HATP achieves an average prediction accuracy of 97.41% in homogeneous environments, surpassing the current state-of-the-art method, ACEso. HATP also attains an average accuracy of 96.04% in heterogeneous data parallel and pipeline parallel configurations, representing the first extension of training time prediction capabilities to heterogeneous environments.
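As a very rough sketch of the kind of analytical timing model the abstract describes, the snippet below sums measured per-layer compute time with a latency-plus-bandwidth estimate for each communication event. The function name, the non-overlapping compute/communication assumption, and all numbers are illustrative, not HATP's published formulation.

```python
def predict_step_time(layer_compute_ms, comm_events, bandwidth_gbps, latency_ms=0.05):
    """Estimate one training iteration in milliseconds.

    layer_compute_ms: measured forward+backward time per layer (ms)
    comm_events: (message_size_MB, count) pairs, e.g. tensor-parallel all-reduces
    bandwidth_gbps: effective link bandwidth in GB/s
    """
    compute = sum(layer_compute_ms)
    comm = sum(count * (latency_ms + size_mb / 1024.0 / bandwidth_gbps * 1000.0)
               for size_mb, count in comm_events)
    return compute + comm

# hypothetical numbers for one pipeline stage: 3 layers, 4 small + 1 large all-reduce
print(predict_step_time([12.0, 11.5, 12.3], [(64.0, 4), (256.0, 1)], bandwidth_gbps=25.0))
```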

Citations: 0
An Ensemble of Swarm-Based Evolutionary Learning Strategies for UAV Path-Planning Problem
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-12 | DOI: 10.1002/cpe.70536
Shrishti Chamoli, Anupam Yadav

The increasing application of unmanned aerial vehicles (UAVs) in diverse domains demands highly robust and autonomous path-planning algorithms capable of navigating complex and dynamic environments. To address the multifaceted challenges posed by obstacle avoidance, energy constraints, and environmental uncertainty, this work proposes an ensemble of learning strategies for the optimal path planning of UAVs. We introduce a modular particle swarm optimization and differential evolution (PSO-DE) ensemble framework and systematically investigate the impact of multiple learning and adaptation strategies, such as chaotic parameter adaptation, opposition-based learning (OBL), and a range of DE mutation schemes, to enhance the optimization process. We perform extensive experimentation across 16 carefully designed scenarios with varying complexity against ten competitive algorithms. We demonstrate that the integration of the PSO-DE hybrid with the opposition-based learning (OBLPSODE) achieves faster convergence while maintaining superior solution quality across all scenarios. The proposed OBLPSODE algorithm substantially outperforms other hybrid variants in both computational efficiency and path optimality, particularly excelling in cluttered environments where traditional algorithms often converge prematurely. Beyond algorithmic contributions, this work provides critical complexity analysis identifying obstacle-checking operations as the primary computational bottleneck in UAV path planning. The findings offer practical guidance for deploying UAVs in real-world applications and establish transferable design principles for developing adaptive meta-heuristics in complex optimization domains.
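Among the learning strategies named in the abstract, opposition-based learning has a particularly compact form: each candidate is reflected across the search bounds and kept only if the reflection scores better. The sketch below shows that step on a population of waypoint vectors; the quadratic fitness and the bounds are placeholders, not the paper's UAV cost function.

```python
import numpy as np

def obl_step(population, fitness, lower, upper):
    """Opposition-based learning pass: compare each candidate with its
    opposite point (lower + upper - x) and keep the better of the two."""
    opposite = lower + upper - population
    f_pop = np.apply_along_axis(fitness, 1, population)
    f_opp = np.apply_along_axis(fitness, 1, opposite)
    keep_opposite = f_opp < f_pop              # minimization: smaller is better
    return np.where(keep_opposite[:, None], opposite, population)

rng = np.random.default_rng(0)
pop = rng.uniform(0.0, 10.0, size=(8, 6))      # 8 candidate paths, 6 coordinates each
improved = obl_step(pop, fitness=lambda x: float(np.sum(x ** 2)),
                    lower=0.0, upper=10.0)
```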

Citations: 0
SCF-Net: Spatial-Channel Fusion and Feature Refinement for Vessel Re-Identification
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-12 | DOI: 10.1002/cpe.70545
Gangzhu Lin, Yongguo Ling, Yuting He, Wenhao Shao, Shaozi Li, Hongfeng Xu

Vessel re-identification (ReID) plays a critical role in maritime surveillance by matching vessels across different camera views. Compared with person or vehicle ReID, vessel ReID faces unique challenges due to subtle interclass differences and large intraclass variations caused by viewpoint changes. These issues are further exacerbated by the highly similar appearances of vessels and the lack of fine-grained identity cues commonly found in other ReID tasks. To address these challenges, we propose a spatial-channel fusion network (SCF-Net), a dual-branch deep framework that integrates a spatial-channel fusion (SCF) module and a feature refinement and alignment (FRA) module. The SCF module captures interdependent relationships between spatial and channel dimensions, enabling the network to emphasize discriminative regions while suppressing irrelevant background information. The FRA module refines high-dimensional embeddings into a compact representation and enforces intraclass similarity via a learnable multilayer perceptron (MLP) and a supervised mean squared error (MSE) loss. By jointly optimizing the two branches and the FRA output, SCF-Net effectively learns both interclass discrimination and intraclass compactness. Extensive experiments demonstrate that SCF-Net achieves competitive performance on public vessel ReID benchmarks, highlighting its effectiveness in handling subtle interclass differences and large intraclass variations.
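The learned SCF module is not specified in the abstract beyond weighting the channel and spatial dimensions; the toy function below applies a softmax channel weighting followed by a sigmoid spatial mask to a feature map, purely to illustrate the two directions of re-weighting. The pooling choices and the absence of learned parameters are simplifications, not the paper's architecture.

```python
import numpy as np

def spatial_channel_reweight(feat):
    """Illustrative spatial-channel re-weighting of a (C, H, W) feature map."""
    # channel weighting: softmax over global-average channel responses
    chan_logits = feat.mean(axis=(1, 2))                      # (C,)
    chan_w = np.exp(chan_logits - chan_logits.max())
    chan_w /= chan_w.sum()
    refined = feat * chan_w[:, None, None]
    # spatial weighting: sigmoid mask over the cross-channel mean response
    spat_logits = refined.mean(axis=0)                        # (H, W)
    spat_mask = 1.0 / (1.0 + np.exp(-spat_logits))
    return refined * spat_mask[None, :, :]

fused = spatial_channel_reweight(np.random.rand(16, 8, 8).astype(np.float32))
```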

Citations: 0
Intelligent Tunnel Collapse Prediction Using Multi-Modal Gaussian Cross-Attention Fusion (MGCAF): Integration of TBM Parameters and Geological Radar Data
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-11 | DOI: 10.1002/cpe.70542
Youliang Chen, Wencan Guan, Rafig Azzam, Suran Wang, Yungui Pan, Chao Yan

Tunnel face instability prediction represents a critical technical challenge in underground engineering, particularly during tunnel boring machine (TBM) excavation under complex geological conditions. This study proposes the Multi-modal Gaussian Cross-Attention Fusion (MGCAF) algorithm, which integrates physics-constrained Gaussian processes with cross-attention mechanisms to achieve intelligent tunnel collapse prediction. The MGCAF framework reconstructs the traditional prediction paradigm by treating earth pressure balance chamber pressure as the primary prediction target rather than an input parameter, while incorporating first-principles constraints of TBM cutting mechanisms into kernel function design. The algorithm employs a dual-pathway architecture that fuses TBM operational parameters through temporal modeling, processes geological radar images via deep feature extraction, and achieves cross-modal information fusion through physics-constrained cross-attention mechanisms. Dynamic kernel optimization enables real-time adaptive parameter adjustment through multi-source gradient feedback. Validation results based on the Yinsong Water Diversion Tunnel project (20 km length, 9 collapse events) demonstrate that the algorithm achieves high-precision prediction with R² = 0.8330, successfully predicting major collapse locations with approximately 20-m accuracy. Comparative analysis against baseline methods (Transformer, Gaussian Process, Random Forest, XGBoost) indicates that MGCAF exhibits superior performance in engineering reliability (0.95) and ROC-AUC (0.765) metrics. Generalization testing on the 2025 Los Angeles Wilmington Sewage Outfall Tunnel confirms the algorithm's cross-domain applicability. Ablation experiments reveal that the cross-attention mechanism serves as the primary performance driver, while uncertainty quantification provides interpretable risk assessment for TBM operations in heterogeneous geological environments.
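The cross-modal fusion step the abstract credits as the main performance driver is, at its core, scaled dot-product cross-attention; the sketch below lets TBM-parameter tokens attend over radar-image tokens. The token shapes are made up, and the physics-constrained Gaussian-process kernels described in the paper are not reproduced.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: one modality queries the other."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)                # (Nq, Nk)
    scores -= scores.max(axis=-1, keepdims=True)          # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ values                                  # (Nq, d_v)

tbm_tokens = np.random.rand(10, 32)     # hypothetical TBM operating-parameter embeddings
radar_tokens = np.random.rand(50, 32)   # hypothetical radar-image patch embeddings
fused = cross_attention(tbm_tokens, radar_tokens, radar_tokens)
```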

Citations: 0
OD-H-SABE: A Hierarchical Searchable Attribute-Based Encryption Scheme With Outsourced Decryption for Blockchain-Based Data Sharing
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-11 | DOI: 10.1002/cpe.70510
Gaimei Gao, Yiqing Wei, Jingyue Wang, Chunxia Liu, Junji Li

The growing demand for data sharing in domains such as health care highlights limitations in existing solutions, including low search efficiency, coarse-grained access control, and heavy decryption overhead on users. To address these challenges, this paper proposes a hierarchical searchable attribute-based encryption scheme with outsourced decryption for blockchain-based data sharing (OD-H-SABE). OD-H-SABE introduces a hierarchical attribute structure alongside an outsourced decryption mechanism that offloads computationally intensive bilinear operations to the cloud server. Consequently, users only need to perform a single lightweight operation to complete decryption, significantly alleviating the computational burden. Furthermore, the scheme integrates searchable encryption with multi-keyword aggregate hashing, enabling efficient search with constant complexity regardless of the number of keywords. Leveraging the transparency and immutability of blockchain, smart contracts verify the integrity of results returned from the cloud server, ensuring data security and trustworthiness throughout the sharing process. Theoretical and experimental analyses demonstrate that OD-H-SABE achieves notable advantages over traditional schemes in terms of security, search efficiency, and computational overhead. For example, compared to MKS-VABE and BEM-ABSE, OD-H-SABE reduces encryption and user-side decryption overhead by approximately 20% and 42%, respectively. This makes it a practical and lightweight solution for constructing secure and efficient blockchain-based data-sharing platforms.
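The constant-complexity multi-keyword search rests on aggregating the keyword set into a single tag; the sketch below shows one order-independent way to do that with plain hashing. It is only a mental model: the scheme itself is pairing-based attribute-based encryption with trapdoors and outsourced decryption, none of which is captured here, and the salt and tag names are invented.

```python
import hashlib

def aggregate_keyword_tag(keywords, salt=b"index-v1"):
    """Collapse a keyword set into one fixed-size tag, independent of order
    and of how many keywords are supplied."""
    digests = sorted(hashlib.sha256(salt + kw.encode("utf-8")).digest()
                     for kw in set(keywords))
    agg = hashlib.sha256()
    for d in digests:
        agg.update(d)
    return agg.hexdigest()

# the data owner indexes the ciphertext under the tag; a query carrying the
# same keyword set reproduces it, so lookup cost stays constant
tag = aggregate_keyword_tag(["diabetes", "2024", "cardiology"])
```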

Citations: 0
Cross-Market Portfolio Optimization via Structure-Aware Deep Reinforcement Learning
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-11 | DOI: 10.1002/cpe.70540
Yiliang Qiao, Yan Zhu, Xu Guo

Financial markets exhibit high levels of non-stationarity and structural heterogeneity, which pose significant challenges to reinforcement learning (RL)-based portfolio optimization methods. To address these challenges, this paper proposes a Structure-Aware Deep Reinforcement Learning (SADRL) framework for cross-market portfolio optimization. The proposed framework explicitly models market structural dynamics through a structure encoder that identifies latent market regimes, while a policy learner adapts investment strategies accordingly. This dual-level learning mechanism enables the model to generalize across heterogeneous markets and remain stable under regime shifts. Extensive experiments on multiple cross-market datasets demonstrate that SADRL achieves superior risk-adjusted returns and improved robustness compared with conventional RL-based baselines. These findings highlight the potential of structure-aware learning for developing intelligent and adaptive decision-making systems in financial markets.
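The dual-level mechanism, a structure signal conditioning how the policy behaves, can be caricatured in a few lines: estimate a market regime from recent returns and pick regime-specific portfolio weights. The volatility threshold, regime labels, and weight tables below are invented; in the paper both the structure encoder and the policy are learned.

```python
import numpy as np

def detect_regime(returns, window=30, vol_threshold=0.02):
    """Label the latest window 'stress' when average rolling volatility is high."""
    vol = returns[-window:].std(axis=0).mean()
    return "stress" if vol > vol_threshold else "calm"

def allocate(returns, regime_policies):
    weights = regime_policies[detect_regime(returns)]
    return weights / weights.sum()

regime_policies = {"calm": np.array([0.4, 0.4, 0.2]),     # hypothetical per-regime weights
                   "stress": np.array([0.2, 0.2, 0.6])}   # tilt toward a defensive asset
daily_returns = np.random.default_rng(1).normal(0.0, 0.01, size=(120, 3))
print(allocate(daily_returns, regime_policies))
```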

Citations: 0
Vehicle Detection and Tracking Method for Highway Fog Scene: Fusion Improvement of AG-YOLOv10n and DeepSORT
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-07 | DOI: 10.1002/cpe.70553
Liu Liqun, Xie Yupeng, Liu Ting

Fog presents a substantial latent hazard to highway traffic safety, significantly impairing drivers' visibility, thereby elevating the risk of high-speed collisions. In driving scenes, occlusion among multiple targets against complex backgrounds diminishes the detection rate of vehicle detectors. Many existing vehicle detection methods depend on bounding box representations for vehicle identification, limiting their capacity to provide accurate localization, particularly in foggy highway conditions. To enable early warnings of preceding vehicles in fog, this article proposes AG-YOLOv10n, a novel vehicle detection method for foggy environments. This approach improves the model's adaptability to fog-induced target features by replacing standard convolutional layers with AKConv and incorporating the GCAM gated convolutional attention module to enhance the extraction of locally salient information, thereby improving vehicle recognition accuracy in fog. Simultaneously, the DeepSORT tracking algorithm is enhanced, with AG-YOLOv10n replacing the traditional Faster R-CNN detector, and combined with the Kalman filter and Hungarian matching mechanism to achieve stable tracking of vehicle targets. The proposed method enhances the accuracy, recall rate, and average precision of the baseline model by 1.4%, 0.6%, and 1.1%, respectively, on the foggy vehicle dataset. The results demonstrate that the proposed method effectively improves detection accuracy, real-time performance, and system robustness while maintaining the model's lightweight nature, which holds significant practical value for highway driving safety in fog.
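The Kalman-plus-Hungarian association step mentioned at the end of the abstract is standard enough to sketch: build an IoU cost matrix between predicted track boxes and new detections, then solve the assignment. The appearance-embedding term DeepSORT also uses is omitted, and the gate value below is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks_to_detections(track_boxes, det_boxes, iou_gate=0.3):
    """Associate predicted track boxes (e.g. from a Kalman filter) with new
    detector boxes via Hungarian assignment on a 1 - IoU cost matrix."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)
    # keep only matches whose IoU clears the gate
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_gate]

tracks = [(10, 10, 50, 50), (100, 100, 150, 160)]
dets = [(12, 11, 52, 49), (300, 300, 340, 350)]
print(match_tracks_to_detections(tracks, dets))   # -> [(0, 0)]
```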

Citations: 0
Personalized Federated Learning for Detecting False Data Injection Attacks in Power Grids
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-07 | DOI: 10.1002/cpe.70543
Mengwei Lv, Ruijuan Zheng, Junlong Zhu, Yongsheng Dong, Qingtao Wu, Xuhui Zhao

In the context of security protection against false data injection attacks (FDIAs) in power grids, traditional federated learning effectively utilizes decentralized data resources for distributed training and achieves global collaboration. However, during the model aggregation process, it often overlooks or drowns out local sparse key features, significantly increasing the risk of missed detection of specific attack patterns. To address this issue, this paper proposes a personalized detection framework based on federated learning. Initially, the bidirectional transformer detection (BTD) model detection algorithm is deployed on the client side and trained on local data. Subsequently, through personalized federated learning, the client dynamically combines the weights of the global and local models to generate a personalized detection model. The framework employs a collaborative optimization mechanism of “global knowledge sharing and local feature adaptation” to effectively mitigate the feature drowning problem while strictly safeguarding data privacy. Compared to existing methods, this approach significantly enhances detection accuracy and robustness against differentiated attack patterns, thereby establishing a more reliable security defense system for smart grids.
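One simple reading of "dynamically combines the weights of the global and local models" is a layer-wise convex combination; the sketch below does exactly that with a fixed mixing coefficient. In the paper the combination is adaptive per client, and the layer names and alpha value here are placeholders.

```python
import numpy as np

def personalize(global_weights, local_weights, alpha=0.6):
    """Layer-wise convex combination of global and local model parameters."""
    return {name: alpha * global_weights[name] + (1.0 - alpha) * local_weights[name]
            for name in global_weights}

# hypothetical two-tensor state dicts for a tiny detector head
g = {"fc.weight": np.ones((4, 8)), "fc.bias": np.zeros(4)}
l = {"fc.weight": np.full((4, 8), 0.5), "fc.bias": np.ones(4)}
personal = personalize(g, l, alpha=0.7)
```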

Citations: 0
Telecommunication Fraud Detection via Improved Graph Convolution and Bidirectional Temporal Learning With Adaptive Fusion Strategy
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-05 | DOI: 10.1002/cpe.70544
Abdulrahman Mathkar Alotaibi

Telecommunication fraud has escalated in complexity due to evolving adversarial strategies that exploit dynamic communication patterns, multimodal signals, and semantic manipulation. Recent developments in deep learning and graph-based modeling have shown promise; however, existing systems struggle to simultaneously capture temporal dependencies, relational feature structures, and linguistic nuances embedded in modern fraud activities. Addressing these limitations, this study proposes an Improved Graph Convolutional Network–Bidirectional LSTM (IGCN–Bi-LSTM) framework integrated within a unified signal-to-text and multi-perspective feature-engineering pipeline for high-accuracy fraud detection. The system begins by converting raw telecommunication signals into structured textual representations through a CNN-driven signal-to-text encoder, enabling the extraction of temporal–spectral patterns. These sequences are subsequently enriched through a comprehensive feature-engineering module that synthesizes linguistic markers, statistical descriptors, lexical indicators, and semantic embeddings. The hybrid IGCN–Bi-LSTM model then jointly learns higher-order relational dependencies among features and bidirectional temporal patterns, while an adaptive score-level fusion mechanism optimally weights model outputs for robust classification. Experiments were conducted using a high-quality synthetic Fraud Detection Transactions Dataset comprising 50,000 transactions with 21 heterogeneous attributes covering behavioral, contextual, financial, and security-related characteristics. Extensive preprocessing, normalization, and stratified data partitioning ensured reliable training of the hybrid model in a GPU-accelerated environment. The proposed model demonstrated substantial improvements over baseline methods by effectively capturing weakly correlated, high-dimensional features and rare-event patterns. Performance evaluation using precision-recall metrics confirmed the superiority of the IGCN–Bi-LSTM fusion, particularly in highly imbalanced scenarios where conventional accuracy metrics fail to reflect true detection capability.
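The adaptive score-level fusion can be pictured as a softmax-weighted average of each branch's fraud probabilities; the snippet below fixes the weights for illustration, whereas the paper learns them. All scores and weight values are made up.

```python
import numpy as np

def fuse_scores(score_lists, weights):
    """Score-level fusion: softmax-normalize branch weights, then average
    each branch's per-sample fraud probabilities."""
    w = np.exp(weights - np.max(weights))
    w /= w.sum()
    scores = np.vstack(score_lists)           # (n_models, n_samples)
    return w @ scores                          # (n_samples,)

graph_scores = np.array([0.91, 0.12, 0.48])   # hypothetical IGCN branch outputs
lstm_scores = np.array([0.85, 0.20, 0.62])    # hypothetical Bi-LSTM branch outputs
fused = fuse_scores([graph_scores, lstm_scores], weights=np.array([1.2, 0.8]))
```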

Citations: 0