
Concurrency and Computation-Practice & Experience: Latest Publications

AT-SPNet: A Personalized Federated Spatio-Temporal Modeling Method for Cross-City Traffic Prediction
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-23 | DOI: 10.1002/cpe.70577
Ying Wang, Renjie Fan, Bo Gong, Hong Wen, Yuanxi Yu

For cross-city traffic prediction, the significant heterogeneity of traffic data across cities and the requirement for privacy protection make it challenging for conventional centralized spatiotemporal graph modeling techniques to balance predictive performance and data security. Therefore, this paper proposes AT-SPNet, a personalized federated spatiotemporal modeling approach specifically designed for cross-city traffic prediction. This method decouples the spatiotemporal modeling paths through the construction of a shared temporal branch and a hidden local spatial branch, thereby mitigating the heterogeneity of cross-city traffic data while preserving privacy. In the temporal branch, Gated Recurrent Units and a multi-head attention mechanism are incorporated to capture temporal dependencies, and a Squeeze-and-Excitation module is employed to enhance the extraction of informative features. In the spatial branch, a Spatial Attention Fusion module based on a triple-attention mechanism is designed to capture spatial features from multiple spatial perspectives, combined with static graph convolution and dynamic graph attention to construct a dual-modal information fusion path. Furthermore, to alleviate the adverse effects of cross-city data heterogeneity in federated training, a personalized federated learning strategy is introduced, which enables differentiated fusion of client spatial features without sharing raw data. Experiments on four real-world traffic datasets demonstrate that AT-SPNet outperforms existing methods in both prediction accuracy and cross-city generalization, validating the effectiveness and practical applicability of the proposed approach for cross-city traffic prediction.
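The decoupling described above (a shared temporal branch aggregated on the server, a spatial branch kept local to each client) can be sketched in a few lines. This is a minimal illustration with hypothetical parameter vectors; AT-SPNet's actual aggregation rule and weighting are not specified in the abstract, so the data-size-weighted average below is an assumption.

```python
def federated_round(clients, data_sizes):
    """One server round of personalized federated averaging: only the
    shared (temporal) parameters are aggregated; the local (spatial)
    parameters never leave the client."""
    total = sum(data_sizes)
    dim = len(clients[0]["temporal"])
    # Data-size-weighted average of the shared branch (FedAvg-style).
    global_temporal = [
        sum(n / total * c["temporal"][i] for n, c in zip(data_sizes, clients))
        for i in range(dim)
    ]
    for c in clients:
        c["temporal"] = list(global_temporal)  # broadcast the shared branch
        # c["spatial"] is intentionally untouched (personalization + privacy)
    return global_temporal

# Three hypothetical city clients with 2-dimensional parameter vectors.
clients = [
    {"temporal": [1.0, 1.0], "spatial": [0.1, 0.1]},
    {"temporal": [2.0, 2.0], "spatial": [0.2, 0.2]},
    {"temporal": [4.0, 4.0], "spatial": [0.3, 0.3]},
]
shared = federated_round(clients, data_sizes=[1, 1, 2])
assert shared == [2.75, 2.75]          # (1*1 + 1*2 + 2*4) / 4 per coordinate
assert clients[0]["spatial"] == [0.1, 0.1]  # spatial branch stayed local
```

The point of the split is visible in the assertions: after a round, every client shares one temporal branch, while each spatial branch remains client-specific and is never transmitted.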

Citations: 0
ThreadMonitor: Low-Overhead Data Race Detection Using Intel Processor Trace
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-21 | DOI: 10.1002/cpe.70517
Farzam Dorostkar, Michel Dagenais, Ankush Tyagi, Vince Bridgers

Data races are among the most difficult multithreading bugs to find, due to their non-deterministic nature. This and the increasing popularity of multithreaded programming have led to the need for practical automated data race detection. In this context, dynamic data race detectors have received more attention, compared to static tools, owing to their higher accuracy and scalability. Yet, state-of-the-art dynamic data race detectors cannot be used in many real-world testing scenarios, since they cause significant slowdown and memory overhead. Notably, ThreadSanitizer (TSan), the default dynamic data race detector in both clang and gcc compilers, is reported to typically impose a 5×–15× slowdown and a 5×–10× memory overhead, which is not tolerable in many industrial use cases. To address this issue, this paper introduces ThreadMonitor (TMon), a low-overhead postmortem data race detector for multithreaded C/C++ programs that use the Pthread library. At runtime, TMon traces the information required for detecting occurrences of data races (i.e., shared memory accesses and timing constraints among threads) using Intel Processor Trace (Intel PT), a non-intrusive hardware feature dedicated to tracing software execution. Thereafter, its postmortem analyzer examines the collected trace data to determine whether the traced program execution exhibited data races, performing a verification similar to that carried out by TSan at runtime. Introducing algorithmic improvements in its postmortem analyzer, TMon can further achieve a higher data race detection coverage compared to TSan. TMon has no direct data memory overhead, incurs minimal instruction memory overhead, and causes a very small slowdown, making it an ideal choice in test environments with limited resources.
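The happens-before verification that such postmortem analyzers perform can be illustrated with a textbook vector-clock detector over a recorded trace. This is a generic sketch for intuition only; TMon's actual trace format and algorithm, built on Intel PT data, are not reproduced here.

```python
def detect_races(trace, n_threads):
    """Flag pairs of unordered conflicting accesses in a recorded trace.
    Events: ("acc", thread, var, is_write) for a memory access, or
    ("sync", src, dst) for a happens-before edge (e.g., an unlock/lock pair)."""
    clocks = [[0] * n_threads for _ in range(n_threads)]
    history = {}            # var -> [(clock snapshot, thread, is_write)]
    races = []

    def happens_before(a, b):
        return all(x <= y for x, y in zip(a, b))

    for ev in trace:
        if ev[0] == "sync":                 # join src's knowledge into dst
            _, src, dst = ev
            clocks[dst] = [max(x, y) for x, y in zip(clocks[dst], clocks[src])]
        else:
            _, t, var, is_write = ev
            clocks[t][t] += 1
            now = list(clocks[t])
            for prev_clock, prev_t, prev_w in history.get(var, []):
                conflicting = is_write or prev_w
                if conflicting and prev_t != t and not happens_before(prev_clock, now):
                    races.append((var, prev_t, t))
            history.setdefault(var, []).append((now, t, is_write))
    return races

# Two writes to x with no synchronization between them: a data race.
assert detect_races([("acc", 0, "x", True), ("acc", 1, "x", True)], 2) == [("x", 0, 1)]
# The same two writes ordered by a sync edge: race-free.
assert detect_races(
    [("acc", 0, "x", True), ("sync", 0, 1), ("acc", 1, "x", True)], 2) == []
```

Running this offline over a complete trace, rather than instrumenting every access at runtime, is what lets a postmortem design shift the detection cost away from the program under test.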

Citations: 0
Comparative Performance Analysis of RPC Frameworks in Public Cloud Environments
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-21 | DOI: 10.1002/cpe.70523
Grzegorz Blinowski, Bartłomiej Pełka

Remote procedure call (RPC) technology has become a cornerstone of modern cloud computing, enabling efficient and seamless communication between distributed services. In cloud infrastructures, where scalability, interoperability, and performance are critical, RPC frameworks play a key role in abstracting the complexities of network communication. Despite their ubiquity, relatively little up-to-date empirical research exists on the comparative performance of RPC frameworks across cloud environments. To the best of our knowledge, no existing study directly compares different cloud platforms and RPC frameworks. This paper addresses that gap by presenting the results of a series of experiments evaluating the performance and scalability of four major RPC frameworks—ONC RPC, gRPC, Web-RPC, and JSON-RPC—across the three most widely used cloud platforms: AWS, Azure, and Google Cloud. The experiments were based on a test suite comprising four distinct RPC call types with varying argument sizes and complexities. Each configuration was executed multiple times with client loads ranging from 1 to 8, with 300,000 runs per test. Total call latency was used as the primary performance measure and analyzed statistically. The results reveal a nuanced picture: while ONC RPC consistently delivers the best performance, no other framework or platform emerges as a clear overall leader. Across all cloud environments and workloads, ONC RPC consistently outperformed the other frameworks. It proved to be at least 20% faster than its nearest competitor and, in some cases, up to four times faster than the second best. By contrast, in short-argument tests, gRPC performed unexpectedly poorly; it often ranked close to or below the text-based frameworks. Particularly surprising was its poor performance under high load in Google Cloud—the platform where it could be expected to perform best.
However, gRPC ranked second in the large-argument tests, while text-based frameworks show relatively poor performance as the argument size increases. We discuss and explain these findings in detail and provide guidelines for selecting the most suitable RPC technology for different use cases.
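One factor behind the large-argument results (binary frameworks holding up, text-based frameworks degrading as arguments grow) is wire-format overhead. A stdlib-only comparison of text versus packed binary marshalling makes the size gap concrete; this is illustrative only and does not reproduce the paper's test suite, and the endianness and framing details of real formats such as ONC RPC's XDR are simplified away.

```python
import json
import struct

# Marshal 10,000 doubles as a JSON text array (as a JSON-RPC argument list
# would be) versus a fixed 8-bytes-per-value packed binary encoding.
values = [i / 3 for i in range(10_000)]

text_payload = json.dumps(values).encode()
binary_payload = struct.pack(f"<{len(values)}d", *values)

print(f"text:   {len(text_payload):,} bytes")
print(f"binary: {len(binary_payload):,} bytes")   # exactly 80,000
```

Decimal text needs roughly 17 significant digits plus separators to round-trip a double, so the text payload is well over twice the size of the packed one here, before any parsing cost is counted.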

Citations: 0
Lattice-Based Public Auditing Schemes for Cloud Storage Security: A Comprehensive Survey
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-21 | DOI: 10.1002/cpe.70556
Renuka Cheeturi, Syam Kumar Pasupuleti, Rashmi Ranjan Rout

Public auditing is a method used to verify the integrity of data stored in the cloud without requiring access to the actual data. However, the advancement of quantum computers poses significant security threats to existing public auditing schemes, as these schemes rely on conventional cryptographic hardness assumptions that are vulnerable to quantum attacks. To address this, NIST has launched the development of post-quantum cryptographic primitives and protocols. Among the various approaches, lattice-based cryptography (LBC) is considered one of the most promising candidates due to its strong security guarantees and inherent resistance to quantum attacks. Leveraging LBC, several researchers have proposed lattice-based public auditing (LBPA) schemes for cloud storage security based on lattice hardness assumptions. This paper provides a comprehensive survey of existing LBPA schemes for cloud storage, presenting a detailed taxonomy and analyzing their similarities, differences, and performance. Additionally, it highlights key challenges and outlines future research directions for designing efficient and secure public auditing schemes in the post-quantum era.
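The kind of lattice hardness assumption these schemes build on can be made concrete with a toy Ajtai-style hash over Z_q, whose collision resistance reduces to the Short Integer Solution (SIS) problem. The parameters below are far too small for any real security, and actual LBPA schemes layer homomorphic tags and challenge-response protocols on top; this sketch only shows the core primitive.

```python
# Toy Ajtai hash h_A(x) = A·x mod q over Z_q^n for short binary inputs x.
# Finding two inputs with the same hash yields a short vector in the kernel
# of A, i.e., a solution to the SIS problem.
q, n, m = 97, 4, 16                     # toy modulus and dimensions
A = [[(31 * i + 17 * j + 7) % q for j in range(m)] for i in range(n)]

def ajtai_hash(x):
    assert len(x) == m and all(b in (0, 1) for b in x)
    return tuple(sum(a * b for a, b in zip(row, x)) % q for row in A)

# Auditing flavor: a verifier keeps the tag of a stored block and later
# rechecks it without trusting the storage server.
block = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
tag = ajtai_hash(block)

tampered = list(block)
tampered[0] ^= 1                        # a single-bit modification
assert ajtai_hash(tampered) != tag      # the tag no longer verifies
```

Because the hash is linear in x, flipping one bit shifts the digest by exactly one column of A modulo q, so any single-bit tampering is caught whenever that column is nonzero, which is the overwhelmingly common case.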

Citations: 0
Correction to “Enhanced Model for Edible Mushroom Recognition Based on Belief Measure-Weighted Fusion”
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-21 | DOI: 10.1002/cpe.70589

S. Yang, H. Wang, L. Huang, and X. Ma, “Enhanced Model for Edible Mushroom Recognition Based on Belief Measure-Weighted Fusion,” Concurrency and Computation: Practice and Experience 38, no. 1 (2026): e70520, https://doi.org/10.1002/cpe.70520.

In the first paragraph of Section 3.1.1 “Multicolor-Space Representation System,” the reference to Figure 4 was incorrect. The correct reference should be Figure 5.

In the first paragraph of Section 3.1.2 “Three-channel Probability-based Classifier,” the reference to Figure 5 was incorrect. The correct reference should be Figure 4.

In the last paragraph of Section 3.2.1 “Formulate the Basic Probability Assignment (BPA),” the text “The detailed calculation process of this case will be elaborated in Section 3.2.3 (A Case Study).” was incorrect. The correct statement should be: “The detailed calculation process of this case will be elaborated in Section 3.2.4 (A Case Study).”

In the last paragraph of Section 3.2.2.2 “Properties of the Belief Cosine Similarity Coefficient,” the text “This will be discussed in detail in Section 3.2.3 (A Case Study).” was incorrect. The correct statement should be: “This will be discussed in detail in Section 3.2.4 (A Case Study).”

In the first paragraph of Section 4.7.2 “Ablation study 2,” the reference to Figure 10 was incorrect. The correct reference should be Figure 11.

In the first paragraph of Section 4.7.4 “Ablation study 4,” the reference to Figure 11 was incorrect. The correct reference should be Figure 10.

We apologize for this error.

Citations: 0
A Vertex Partitioning Algorithm for Large-Scale Uncertain Graphs
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-20 | DOI: 10.1002/cpe.70580
Huanqing Cui, Anfu Chang, Jinbin Zhu, Ruixia Liu, Kekun Hu

With the exponential growth of graph-structured data, efficient single-machine analysis has become increasingly impractical, making high-performance distributed graph computing systems indispensable. The efficacy of these systems hinges critically on high-quality graph partitioning. The edges of many graphs arising from real applications are uncertain, yet most existing graph partitioning algorithms target only deterministic graphs and ignore this uncertainty. This paper presents a novel partitioning algorithm tailored for uncertain graphs, PAUG (Partitioning Algorithm for Uncertain Graphs). First, it formalizes the partitioning problem as an optimization task to minimize the cut-edge ratio while balancing load. Second, it introduces probabilistic similarity to quantify vertex relationships under uncertainty. Finally, it details the PAUG algorithm, which consists of an initial partitioning phase and a score-function-guided refinement strategy. Experimental results show that PAUG achieves an average 23.2% reduction in cut-edge ratio and a 26.2% improvement in load balance over state-of-the-art algorithms.
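The objective described here (minimize the expected cut-edge ratio subject to balanced loads, where each edge exists with some probability) can be made concrete with a greedy baseline. This is a simplified stand-in for intuition, not the PAUG algorithm itself; its two-phase refinement and probabilistic-similarity score are not reproduced.

```python
def greedy_partition(n_vertices, edges, k, slack=1.0):
    """Assign vertices to k parts, preferring the part holding the most
    expected neighbor weight, under a capacity cap. Each edge (u, v, p)
    exists with probability p, so cut quality is measured in expectation."""
    cap = slack * n_vertices / k
    adj = {v: [] for v in range(n_vertices)}
    for u, v, p in edges:
        adj[u].append((v, p))
        adj[v].append((u, p))
    part, sizes = {}, [0] * k
    for v in range(n_vertices):
        score = [0.0] * k              # expected edges into each part so far
        for u, p in adj[v]:
            if u in part:
                score[part[u]] += p
        order = sorted(range(k), key=lambda i: (-score[i], sizes[i]))
        target = next(i for i in order if sizes[i] < cap)
        part[v] = target
        sizes[target] += 1
    expected_cut = sum(p for u, v, p in edges if part[u] != part[v])
    expected_total = sum(p for _, _, p in edges)
    return part, expected_cut / expected_total

# Two high-probability triangles joined by one unlikely edge: the cut
# should land on the low-probability bridge.
edges = [(0, 1, 0.9), (1, 2, 0.9), (0, 2, 0.9),
         (3, 4, 0.9), (4, 5, 0.9), (3, 5, 0.9), (2, 3, 0.1)]
part, ratio = greedy_partition(6, edges, k=2)
assert part[0] == part[1] == part[2] and part[3] == part[4] == part[5]
assert part[0] != part[3]              # only the 0.1-probability edge is cut
```

Weighting neighbor counts by edge probability is what distinguishes the uncertain setting: a deterministic partitioner would treat the 0.1-probability bridge the same as a 0.9-probability edge.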

Citations: 0
CLOUD SMART: A Dynamic Scheduling Framework for Minimizing Response Time in Cloud Environments
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-20 | DOI: 10.1002/cpe.70561
S. Anuradha, K. Unnikrishnan, S. Reshma, O. Bhaskaru, Richa Sharma

Cloud computing offers on-demand access to computing resources; however, minimizing response time while ensuring compliance with Service Level Agreements (SLAs) remains a critical challenge. The proposed CLOUD SMART framework aims to intelligently minimize response time, process delays, and propagation latency in cloud environments through adaptive, SLA-aware dynamic scheduling. This study evaluates the model using the Cloud Workload Dataset for Scheduling Analysis, available on Kaggle. The dataset undergoes preprocessing, including median imputation for missing numerical values, and the derivation of key features such as response time, deadlines, and priority tiers for accurate workload profiling. Workload characterization through statistical profiling and clustering reveals patterns, arrival rates, and task categories that guide scheduling strategy selection. Baseline performance is established using discrete event simulation of standard policies such as FCFS, SJF/Min-Min, Max-Min, and EDF. A novel Predictive Deadline-Aware Hybrid Scheduling (PDHS) approach is integrated to predict completion times and dynamically switch scheduling strategies based on urgency. An execution and closed-loop feedback mechanism enables real-time adaptation. Experimental results show that CLOUD SMART significantly reduces response time, improves SLA compliance, and enhances resource utilization compared to static scheduling baselines. The PDHS model achieves an average response time of 6.72 s, significantly lower than all baselines. Average waiting time is reduced to 2.87 s, and Makespan improves to 138.4 s. SLA compliance reaches 97%, with a deadline miss ratio of only 3%. System throughput is enhanced to 38.5 tasks per second, and resource utilization climbs to 92%. Prediction accuracy excels with an MAE of 0.94 s and RMSE of 1.26 s.
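The gap between a deadline-aware policy and FCFS is easy to reproduce in a toy single-server discrete-event simulation. This is illustrative only; the PDHS model itself, its completion-time predictor, and the Kaggle workload are not reproduced here.

```python
import heapq

def simulate(tasks, policy):
    """Non-preemptive single-server run. tasks: (arrival, service, deadline).
    Returns (average response time, deadline-miss ratio)."""
    tasks = sorted(tasks)                        # by arrival time
    pending, done, t, i = [], [], 0.0, 0
    while i < len(tasks) or pending:
        if not pending and t < tasks[i][0]:
            t = tasks[i][0]                      # idle until the next arrival
        while i < len(tasks) and tasks[i][0] <= t:
            arr, svc, dl = tasks[i]
            i += 1
            key = dl if policy == "EDF" else arr  # EDF: earliest deadline first
            heapq.heappush(pending, (key, arr, svc, dl))
        _, arr, svc, dl = heapq.heappop(pending)
        t += svc
        done.append((t - arr, t > dl))           # (response time, missed?)
    avg = sum(r for r, _ in done) / len(done)
    miss = sum(m for _, m in done) / len(done)
    return avg, miss

# A long job, a second long job, and a short urgent job arriving close together.
tasks = [(0.0, 5.0, 100.0), (0.5, 5.0, 100.0), (1.0, 1.0, 8.0)]
fcfs = simulate(tasks, "FCFS")
edf = simulate(tasks, "EDF")
assert edf[0] < fcfs[0]                  # EDF lowers mean response time here
assert edf[1] == 0.0 and fcfs[1] > 0.0   # and eliminates the deadline miss
```

Under FCFS the urgent one-second task waits behind both long jobs and misses its deadline; ordering the ready queue by deadline serves it first at almost no cost to the long jobs, which is the intuition behind switching strategies based on urgency.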

Citations: 0
A Cognitive IBNS Framework for Real-Time Defense Against TCP SYN Flood Attacks
IF 1.5 Zone 4 Computer Science Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-01-20 DOI: 10.1002/cpe.70578
Sneh Kanwar Singh Sidhu, Sikander Singh Cheema

Intent-based networking (IBN) is an advanced approach that combines artificial intelligence (AI) and machine learning (ML) technologies with automation, adapting the network's functions to an organization's business objectives for advanced network management. The concept of IBN is transforming autonomous network management with regard to cybersecurity. It improves the detection, response, and prevention of threats by encoding security policies into automated network actions and configurations. IBN enables adaptive, proactive, and enduring protection in sophisticated and evolving cybersecurity environments, thereby reinforcing cybersecurity resilience. This work proposes an integrated approach that combines IBN with ML and reinforcement learning (RL) to effectively defend against TCP SYN-based DDoS attacks. The ML model proposed in this research achieves a detection accuracy of 99.86%. Additionally, the RL-based mitigation strategy improves response time by 43% compared to traditional reactive security methods. Within the IBNS architecture, the framework actively translates high-level security intents into adaptive policies for near-real-time response, mitigating threats with a false positive rate below 0.0008. The system also remains stable within the network while automating threat responses and withstanding diverse attacks. This automated approach aligns security with operational objectives, enabling proactive, adaptive, and multi-dimensional security. These findings mark the advances that intelligent autonomous systems can bring to cybersecurity in future network infrastructures, showing the benefits of enhancing resilience through self-learning and intent-driven mechanisms.
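For contrast with the learned detector described above, a crude rule-based SYN flood heuristic fits in a few lines (a toy illustration, not the paper's ML/RL pipeline; the threshold values and packet representation are arbitrary assumptions): a source whose SYN count greatly exceeds its completed handshakes is a flood suspect, since SYN flood traffic opens connections it never finishes.

```python
from collections import defaultdict

def syn_flood_suspects(packets, ratio_threshold=3.0, min_syns=10):
    """Flag sources whose SYN count far exceeds their ACK count.

    `packets` is an iterable of (source_ip, flag) pairs, flag being
    "SYN" or "ACK". Thresholds are illustrative, not tuned values.
    """
    syns, acks = defaultdict(int), defaultdict(int)
    for src, flag in packets:
        if flag == "SYN":
            syns[src] += 1
        elif flag == "ACK":
            acks[src] += 1
    return {src for src, n in syns.items()
            if n >= min_syns and n / max(acks[src], 1) >= ratio_threshold}

# A flooding source (many half-open connections) vs. a normal client.
stream = [("10.0.0.9", "SYN")] * 30 + [("10.0.0.9", "ACK")] * 2 \
       + [("10.0.0.5", "SYN"), ("10.0.0.5", "ACK")] * 8
print(syn_flood_suspects(stream))  # → {'10.0.0.9'}
```

A static rule like this is exactly what the ML/RL approach improves on: fixed thresholds are easy to evade and generate false positives under bursty legitimate load, whereas a learned detector with RL-driven mitigation adapts them online.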

Citations: 0
Weak Memory Model Formalisms: Introduction and Survey
IF 1.5 Zone 4 Computer Science Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-01-20 DOI: 10.1002/cpe.70484
Roger C. Su, Robert J. Colvin

Memory models define the order in which accesses to shared memory in a concurrent system may be observed to occur. Such models are a necessity since program order is not a reliable indicator of execution order, due to microarchitectural features or compiler transformations. Concurrent programming, already a challenging task, is thus made even harder when weak memory effects must be addressed. A rigorous specification of weak memory models is therefore essential to make this problem tractable for developers of safety- and security-critical, low-level software. In this paper we survey the field of formalisations of weak memory models, including their specification, their effects on execution, and tools and inference systems for reasoning about code. To assist the discussion we also provide an introduction to two styles of formal representation found commonly in the literature (using a much simplified version of Intel's x86 as the example): a step-by-step construction of traces of the system (operational semantics); and with respect to relations between memory events (axiomatic semantics). The survey covers some long-standing hardware features that lead to observable weak behaviours, a description of historical developments in practice and in theory, an overview of computability and complexity results, and outlines current and future directions in the field.
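The gap between program order and execution order that the survey addresses is often demonstrated with the store-buffering (SB) litmus test: thread 0 runs `x = 1; r0 = y` and thread 1 runs `y = 1; r1 = x`. The sketch below (a deliberately simplified toy model, assuming single-writer values and no fences) enumerates sequentially consistent interleavings to show `r0 = r1 = 0` is forbidden there, then shows one TSO-style execution with per-thread store buffers where it becomes observable:

```python
from itertools import permutations

# Events: thread 0 = [st x, ld y], thread 1 = [st2 y, ld2 x].
EVENTS = [("st", "x"), ("ld", "y"), ("st2", "y"), ("ld2", "x")]

def sc_outcomes():
    """All (r0, r1) outcomes under sequential consistency: events
    interleave, stores hit memory immediately, program order is kept."""
    outcomes = set()
    for order in permutations(range(4)):
        if order.index(0) > order.index(1) or order.index(2) > order.index(3):
            continue  # would violate per-thread program order
        mem, regs = {"x": 0, "y": 0}, {}
        for i in order:
            kind, var = EVENTS[i]
            if kind.startswith("st"):
                mem[var] = 1
            else:
                regs[kind] = mem[var]
        outcomes.add((regs["ld"], regs["ld2"]))
    return outcomes

def tso_weak_outcome():
    """One TSO-style execution: both stores sit in private store
    buffers while the loads read stale main memory."""
    mem = {"x": 0, "y": 0}
    buf0, buf1 = [("x", 1)], [("y", 1)]  # buffered, not yet drained
    r0 = mem["y"]  # thread 0's load: no pending store to y in its buffer
    r1 = mem["x"]  # thread 1's load: no pending store to x in its buffer
    for var, val in buf0 + buf1:         # buffers drain afterwards
        mem[var] = val
    return (r0, r1)

print(sc_outcomes())       # (0, 0) is absent under SC
print(tso_weak_outcome())  # → (0, 0), the "weak" outcome
```

The enumeration over interleavings is a miniature operational semantics; an axiomatic treatment would instead forbid or permit `(0, 0)` by constraining relations (program order, reads-from, coherence) over the same four events.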

Citations: 0
Incremental Similarity-Based Label Propagation Algorithm for Dynamic Community Detection
IF 1.5 Zone 4 Computer Science Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-01-20 DOI: 10.1002/cpe.70559
Asma Douadi, Nadjet Kamel, Lakhdar Sais

We propose an incremental similarity-based label propagation algorithm (DLPA-S) for detecting dynamic community structures. As the network evolves, the method efficiently updates the communities over time via local label updates driven by changes in network topology—including edge and vertex additions or removals—and vertex similarity. This incremental approach significantly reduces computational cost while preserving accuracy in capturing community evolution. We evaluate DLPA-S using a comprehensive set of quality metrics that assess both the structural properties of the network and the agreement between detected communities and ground-truth partitions. Experiments are conducted on synthetic and real-world dynamic networks, varying key graph characteristics such as the number of vertices and the average degree, as well as across diverse community scenarios. The results show that DLPA-S consistently achieves stable and high-performing results, maintains high NMI and F1 scores, ensures strong internal connectivity, clear community separability, and avoids disconnected communities, while remaining computationally efficient.
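The plain label propagation baseline that DLPA-S extends can be sketched as follows (a minimal static variant with deterministic tie-breaking; the vertex-similarity weighting and incremental updates of DLPA-S itself are not modeled here): each vertex repeatedly adopts the most frequent label among its neighbors until labels stabilize.

```python
from collections import Counter

def label_propagation(adj, max_iters=100):
    """Static label propagation over an adjacency dict {v: [neighbors]}.
    Deterministic sweep order and tie-breaking keep the toy reproducible;
    real implementations usually randomize both."""
    labels = {v: v for v in adj}          # every vertex starts in its own community
    for _ in range(max_iters):
        changed = False
        for v in sorted(adj):
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            candidates = {l for l, c in counts.items() if c == top}
            if labels[v] in candidates:
                continue                   # keep the current label on ties
            labels[v] = max(candidates)    # deterministic tie-break
            changed = True
        if not changed:                    # labels stabilized
            break
    return labels

# Two triangles joined by a single bridge edge (2-3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))  # the two triangles get distinct labels
```

An incremental method such as DLPA-S avoids rerunning this sweep from scratch after each snapshot: when an edge or vertex is added or removed, only labels in the affected neighborhood are re-evaluated, which is what makes the approach tractable on evolving networks.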

Citations: 0