
Concurrency and Computation-Practice & Experience: Latest Publications

Performance and Cost Evaluation of StarPU on AWS: Case Studies With Dense Linear Algebra Kernels and N-Body Simulations
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-23 | DOI: 10.1002/cpe.70582
Vanderlei Munhoz, Vinicius G. Pinto, João V. F. Lima, Márcio Castro, Daniel Cordeiro, Emilio Francesquini

Task-based programming interfaces introduce a paradigm in which computations are decomposed into fine-grained units of work known as “tasks”. StarPU is a runtime system originally developed to support task-based parallelism on on-premise heterogeneous architectures by abstracting low-level hardware details and efficiently managing resource scheduling. It enables developers to express applications as task graphs with explicit data dependencies, which are then dynamically scheduled across available processing units, such as CPUs and GPUs. In recent years, major cloud providers have begun offering virtual machines equipped with both CPUs and GPUs, allowing researchers to deploy and execute parallel workloads in virtual heterogeneous clusters. However, the performance and cost effectiveness of executing StarPU-based applications in public cloud environments remain unclear, particularly due to variability in hardware configurations, network performance, ever-changing pricing models, and computing performance due to virtualization and multi-tenancy. In this paper, we evaluate the performance and cost-efficiency of StarPU on Amazon Elastic Compute Cloud (EC2) using dense linear algebra kernels and N-Body simulations as case studies. Our experiments consider different cluster configurations, including powerful and more expensive instances with four NVIDIA GPUs per node (which we refer to as “fat nodes”), and less powerful and lower-cost instances with a single NVIDIA GPU per node (which we refer to as “thin nodes”). Our results show that arithmetic precision affects the performance–cost trade-off for dense linear algebra applications, whereas N-Body simulations consistently achieve better cost-efficiency on thin-node clusters. These findings underscore the challenges of optimizing HPC workloads for performance and cost in cloud environments.
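To make the task-graph idea concrete, here is a minimal, illustrative sketch of the programming model the abstract describes: tasks declare which data they read and write, and a small runtime launches each task as soon as its inputs are available. This is not StarPU's actual API (StarPU is a C runtime system); the class and names below are invented for illustration.

```python
# Illustrative task-graph execution in the style the abstract describes:
# tasks declare data dependencies, and a runtime schedules each task once
# its inputs are ready. NOT StarPU's API; a sketch of the paradigm only.
from concurrent.futures import ThreadPoolExecutor

class TaskGraph:
    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.futures = {}          # data name -> future that produces it

    def submit(self, func, reads=(), writes=""):
        deps = [self.futures[r] for r in reads if r in self.futures]
        def run():
            args = [d.result() for d in deps]   # block until inputs ready
            return func(*args)
        fut = self.pool.submit(run)
        if writes:
            self.futures[writes] = fut
        return fut

g = TaskGraph()
g.submit(lambda: 2.0, writes="a")
g.submit(lambda: 3.0, writes="b")
total = g.submit(lambda a, b: a + b, reads=("a", "b"), writes="sum")
print(total.result())   # 5.0
g.pool.shutdown()
```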

Citations: 0
An Efficient Deep Learning Model for Multiclass Brain Tumor Classification Using MRI Images With Triple Explainability
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-23 | DOI: 10.1002/cpe.70548
Kashif Mazhar, Pragya Dwivedi, Vibhor Kant

Brain tumors (BT) are a major health challenge worldwide that demands early detection so that effective treatment strategies can be planned. This kind of cancer greatly diminishes the patient's quality of life and lifespan, which makes early diagnosis and effective treatment all the more important. Medical professionals need assistance with this difficult and error-prone process, and it is also essential to improve the interpretability and accuracy of the recognition model. To achieve this goal, the proposed framework introduces a hybrid deep learning model augmented with explainable AI, which performs brain tumor classification and model interpretation from MRI. The proposed study involves four key steps: pre-processing, segmentation, classification, and analysis. The input images are first pre-processed using median-boosted Kuan Filtering (Me-KF) to remove noise in the data and improve the subsequent segmentation procedure. After pre-processing, the Extended Multi-Inception Attention U-Net (ExMIA_U-Net) technique is applied to effectively separate the brain tumor region. Finally, a deep learning method based on Convolution Attentive assisted EfficientNetB0 (CA-EfficientNetB0) is presented to categorize brain tumors into gliomas, meningiomas, pituitary tumors, and normal (tumor-free) cases. The model uses Shapley additive explanations (SHAP), local interpretable model-agnostic explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM) for model interpretation. The proposed model is evaluated on a brain tumor classification dataset. In the results section, the proposed model is compared with many other prevailing schemes and achieves 99.45% accuracy, 99.16% precision, 98.97% recall, and a 99.06% F1-score. The results show that an efficient, interpretable, and robust model is developed for brain tumor classification.
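For context on the reported figures, the sketch below shows how multiclass accuracy and macro-averaged precision, recall, and F1 are conventionally computed from a confusion matrix. The 4x4 matrix is made up for illustration; only the four class labels come from the abstract.

```python
# Macro-averaged multiclass metrics from a confusion matrix. The matrix
# values are hypothetical; classes: glioma, meningioma, pituitary, tumor-free.
import numpy as np

cm = np.array([[97, 1, 1, 1],
               [1, 96, 2, 1],
               [0, 1, 99, 0],
               [1, 0, 0, 99]])   # rows = true class, cols = predicted class

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)           # per-class TP / predicted-as-class
recall = tp / cm.sum(axis=1)              # per-class TP / actual-in-class
f1 = 2 * precision * recall / (precision + recall)

print("accuracy :", tp.sum() / cm.sum())
print("precision:", precision.mean())     # macro average
print("recall   :", recall.mean())
print("F1-score :", f1.mean())
```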

Citations: 0
MADS-UC: Finding Key Users in Online Social Networks Through Users Activation Activeness and Combination Weighting MCDM
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-23 | DOI: 10.1002/cpe.70566
Pingle Yang, Laijun Zhao, Fanyuan Meng, Huiyong Li, Lixin Zhou, Chen Dong

In online social networks, identifying influential users is crucial for maintaining network stability and accelerating information dissemination. However, most existing research evaluates the influence of users according to the topological structure of local or global networks, ignoring users' historical information and social interactions. In this work, we introduce a novel algorithm called MADS-UC to pinpoint the influential super-spreaders of information. Specifically, three kinds of activation influence between each pair of users are defined to better portray their mutual interactions, and the proposed measure fully accounts for the topological and historical information of networks as well as the social interactions of users. Then, a hybrid multi-attribute decision-making algorithm is put forward to evaluate user influence, in which the three kinds of activation influence serve as basic indicators and the weight of each indicator is determined from both subjective and objective dimensions. Finally, to strike a good balance between algorithm accuracy and time complexity, influential users are identified by considering the interactions between a user and the other users within its influence range. We collect seven real-world datasets about the mobile product Honor from the Sina Weibo platform in 2024, and a series of experiments is conducted to validate the effectiveness of the MADS-UC algorithm. Experimental results show that MADS-UC surpasses six widely used algorithms in robustness, sensitivity analysis, and distinguishing capability, which is useful for accelerating information dissemination in the product marketing process.
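The abstract does not give the exact weighting formulas, so the following is a generic combination-weighting sketch in the same spirit: objective weights from the entropy method, hypothetical subjective weights, a linear combination of the two, and a weighted ranking of users over three activation-influence indicators. All numbers are invented.

```python
# Generic combination-weighting MCDM sketch: entropy-based objective
# weights combined with expert (subjective) weights, then a weighted
# ranking of users. Indicator values and subjective weights are made up.
import numpy as np

X = np.array([[0.8, 0.4, 0.6],    # rows: users, cols: 3 indicators
              [0.5, 0.9, 0.3],
              [0.7, 0.6, 0.8],
              [0.2, 0.3, 0.1]])

P = X / X.sum(axis=0)                          # normalize each indicator
k = 1.0 / np.log(len(X))
entropy = -k * (P * np.log(P + 1e-12)).sum(axis=0)
w_obj = (1 - entropy) / (1 - entropy).sum()    # objective (entropy) weights

w_sub = np.array([0.5, 0.3, 0.2])              # hypothetical expert weights
w = 0.5 * w_obj + 0.5 * w_sub                  # simple linear combination
w /= w.sum()

scores = X @ w
ranking = np.argsort(-scores)                  # most influential first
print("weights:", w, "ranking:", ranking)
```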

Citations: 0
AT-SPNet: A Personalized Federated Spatio-Temporal Modeling Method for Cross-City Traffic Prediction
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-23 | DOI: 10.1002/cpe.70577
Ying Wang, Renjie Fan, Bo Gong, Hong Wen, Yuanxi Yu

For cross-city traffic prediction, the significant heterogeneity of traffic data across cities and the requirement for privacy protection make it challenging for conventional centralized spatiotemporal graph modeling techniques to balance predictive performance and data security. Therefore, this paper proposes AT-SPNet, a personalized federated spatiotemporal modeling approach specifically designed for cross-city traffic prediction. This method decouples the spatiotemporal modeling paths through the construction of a shared temporal branch and a hidden local spatial branch, thereby mitigating the heterogeneity of cross-city traffic data while preserving privacy. In the temporal branch, Gated Recurrent Units and a multi-head attention mechanism are incorporated to capture temporal dependencies, and a Squeeze-and-Excitation module is employed to enhance the extraction of informative features. In the spatial branch, a Spatial Attention Fusion module based on a triple-attention mechanism is designed to capture spatial features from multiple spatial perspectives, combined with static graph convolution and dynamic graph attention to construct a dual-modal information fusion path. Furthermore, to alleviate the adverse effects of cross-city data heterogeneity in federated training, a personalized federated learning strategy is introduced, which enables differentiated fusion of client spatial features without sharing raw data. Experiments on four real-world traffic datasets demonstrate that AT-SPNet outperforms existing methods in both prediction accuracy and cross-city generalization, validating the effectiveness and practical applicability of the proposed approach for cross-city traffic prediction.
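As a rough illustration of the temporal-branch ingredients named in the abstract (Gated Recurrent Units, multi-head attention, and a Squeeze-and-Excitation module), here is a PyTorch sketch. All sizes are hypothetical, and this is not the authors' AT-SPNet implementation.

```python
# Sketch of the temporal branch the abstract describes: a GRU, self-attention
# over its outputs, and Squeeze-and-Excitation (SE) channel recalibration.
# Dimensions are hypothetical, not the authors' configuration.
import torch
import torch.nn as nn

class SE(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                 # x: (batch, time, channels)
        s = self.fc(x.mean(dim=1))        # squeeze over time, then excite
        return x * s.unsqueeze(1)         # rescale channels

class TemporalBranch(nn.Module):
    def __init__(self, in_dim=1, hidden=64, heads=4):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.se = SE(hidden)

    def forward(self, x):                 # x: (batch, time, in_dim)
        h, _ = self.gru(x)
        a, _ = self.attn(h, h, h)         # self-attention over time steps
        return self.se(a)

out = TemporalBranch()(torch.randn(8, 12, 1))
print(out.shape)                          # torch.Size([8, 12, 64])
```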

Citations: 0
ThreadMonitor: Low-Overhead Data Race Detection Using Intel Processor Trace
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-21 | DOI: 10.1002/cpe.70517
Farzam Dorostkar, Michel Dagenais, Ankush Tyagi, Vince Bridgers

Data races are among the most difficult multithreading bugs to find, due to their non-deterministic nature. This and the increasing popularity of multithreaded programming have led to the need for practical automated data race detection. In this context, dynamic data race detectors have received more attention, compared to static tools, owing to their higher accuracy and scalability. Yet, state-of-the-art dynamic data race detectors cannot be used in many real-world testing scenarios, since they cause significant slowdown and memory overhead. Notably, ThreadSanitizer (TSan), the default dynamic data race detector in both clang and gcc compilers, is reported to typically impose a 5× to 15× slowdown and a 5× to 10× memory overhead, which is not tolerable in many industrial use cases. To address this issue, this paper introduces ThreadMonitor (TMon), a low-overhead postmortem data race detector for multithreaded C/C++ programs that use the Pthread library. At runtime, TMon traces the information required for detecting occurrences of data races (i.e., shared memory accesses and timing constraints among threads) using Intel Processor Trace (Intel PT), a non-intrusive hardware feature dedicated to tracing software execution. Thereafter, its postmortem analyzer examines the collected trace data to determine whether the traced program execution exhibited data races, performing a verification similar to that carried out by TSan at runtime. Introducing algorithmic improvements in its postmortem analyzer, TMon can further achieve a higher data race detection coverage compared to TSan. TMon has no direct data memory overhead, incurs minimal instruction memory overhead, and causes a very small slowdown, making it an ideal choice in test environments with limited resources.
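Detectors in the TSan family decide whether two memory accesses race by checking a happens-before relation, typically tracked with vector clocks. The toy postmortem checker below replays a recorded trace of read/write/lock events and reports unordered conflicting accesses; the event format is invented for illustration and is not TMon's Intel PT trace format.

```python
# Toy postmortem happens-before race check over a recorded trace, in the
# spirit of what TSan does online and TMon does after the fact.
from collections import defaultdict

def find_races(trace, nthreads):
    clocks = [[0] * nthreads for _ in range(nthreads)]   # one vector clock per thread
    released = {}                    # lock -> vector clock at last release
    last_access = defaultdict(list)  # addr -> [(tid, clock, is_write)]
    races = []

    def hb(c1, c2):                  # does clock c1 happen-before c2?
        return all(a <= b for a, b in zip(c1, c2))

    for tid, op, obj in trace:
        c = clocks[tid]
        c[tid] += 1
        if op == "acquire" and obj in released:
            clocks[tid] = c = [max(a, b) for a, b in zip(c, released[obj])]
        elif op == "release":
            released[obj] = list(c)
        elif op in ("read", "write"):
            for otid, oc, owrite in last_access[obj]:
                # conflicting = different threads, at least one write,
                # and no happens-before ordering between the accesses
                if otid != tid and (owrite or op == "write") and not hb(oc, c):
                    races.append((obj, otid, tid))
            last_access[obj].append((tid, list(c), op == "write"))
    return races

trace = [(0, "write", "x"), (1, "write", "x"),          # unsynchronized: race
         (0, "release", "m"), (1, "acquire", "m"), (1, "read", "x")]  # ordered: no race
print(find_races(trace, 2))   # [('x', 0, 1)]
```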

Citations: 0
Comparative Performance Analysis of RPC Frameworks in Public Cloud Environments
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-21 | DOI: 10.1002/cpe.70523
Grzegorz Blinowski, Bartłomiej Pełka

Remote procedure call (RPC) technology has become a cornerstone of modern cloud computing, enabling efficient and seamless communication between distributed services. In cloud infrastructures, where scalability, interoperability, and performance are critical, RPC frameworks play a key role in abstracting the complexities of network communication. Despite their ubiquity, relatively little up-to-date empirical research exists on the comparative performance of RPC frameworks across cloud environments, and, to the best of our knowledge, there is no comparative study in which different cloud platforms and RPC frameworks were directly compared. This paper addresses that gap by presenting the results of a series of experiments evaluating the performance and scalability of four major RPC frameworks (ONC RPC, gRPC, Web-RPC, and JSON-RPC) across the three most widely used cloud platforms: AWS, Azure, and Google Cloud. The experiments were based on a test suite comprising four distinct RPC call types with varying argument sizes and complexities. Each configuration was executed multiple times with client loads ranging from 1 to 8, with 300,000 runs per test. Total call latency was used as the primary performance measure and analyzed statistically. The results reveal a nuanced picture: while ONC RPC consistently delivers the best performance, no other framework or platform emerges as a clear overall leader. Across all cloud environments and workloads, ONC RPC consistently outperformed the other frameworks, proving at least 20% faster than its nearest competitor and, in some cases, up to four times faster than the second best. By contrast, in short-argument tests, gRPC performed unexpectedly poorly, often ranking close to or below the text-based frameworks. Particularly surprising was its poor performance under high load in Google Cloud, the platform where it could be expected to perform best. However, gRPC ranked second in the large-argument tests, while text-based frameworks show relatively poor performance as the argument size increases. We discuss and explain these findings in detail and provide guidelines for selecting the most suitable RPC technology for different use cases.
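A minimal version of the measurement methodology, for one of the four frameworks, might look like the harness below: repeated JSON-RPC 2.0 calls over HTTP with total call latency recorded and summarized. The endpoint URL and method name are hypothetical; the paper's actual test suite covers four call types and multiple client loads.

```python
# Minimal latency-measurement harness in the spirit of the benchmark:
# repeated JSON-RPC 2.0 calls with total call latency recorded.
import json, time, statistics, urllib.request

URL = "http://localhost:8080/rpc"          # hypothetical JSON-RPC endpoint

def call(method, params, call_id):
    payload = json.dumps({"jsonrpc": "2.0", "method": method,
                          "params": params, "id": call_id}).encode()
    req = urllib.request.Request(URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    t0 = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return time.perf_counter() - t0, body   # total call latency, response

def bench(runs=1000):
    latencies = []
    for i in range(runs):
        dt, _ = call("echo", ["x" * 64], i)  # a short-argument test case
        latencies.append(dt * 1e3)           # milliseconds
    latencies.sort()
    print(f"median={statistics.median(latencies):.3f} ms  "
          f"p95={latencies[int(0.95 * len(latencies))]:.3f} ms")

if __name__ == "__main__":
    bench()
```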

Citations: 0
Lattice-Based Public Auditing Schemes for Cloud Storage Security: A Comprehensive Survey
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-21 | DOI: 10.1002/cpe.70556
Renuka Cheeturi, Syam Kumar Pasupuleti, Rashmi Ranjan Rout

Public auditing is a method used to verify the integrity of data stored in the cloud without requiring access to the actual data. However, the advancement of quantum computers poses significant security threats to existing public auditing schemes, as these schemes rely on conventional cryptographic hardness assumptions that are vulnerable to quantum attacks. To address this, NIST has initiated the standardization of post-quantum cryptographic primitives and protocols. Among the various approaches, lattice-based cryptography (LBC) is considered one of the most promising candidates due to its strong security guarantees and inherent resistance to quantum attacks. Leveraging LBC, several researchers have proposed lattice-based public auditing (LBPA) schemes for cloud storage security based on lattice hardness assumptions. This paper provides a comprehensive survey of existing LBPA schemes for cloud storage, presenting a detailed taxonomy and analyzing their similarities, differences, and performance. Additionally, it highlights key challenges and outlines future research directions for designing efficient and secure public auditing schemes in the post-quantum era.
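As background, lattice-based schemes of the kind this survey covers typically reduce their security to problems such as Learning With Errors (LWE). The toy instance below shows the shape of the assumption: recovering s from (A, b = As + e mod q) is believed hard once small noise e is added. Parameters are deliberately tiny and insecure.

```python
# Toy Learning-With-Errors (LWE) instance, the kind of hardness assumption
# underlying lattice-based cryptography. Parameters are illustrative only;
# real schemes use much larger dimensions and moduli.
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 16, 97                       # secret dim, samples, modulus

A = rng.integers(0, q, size=(m, n))       # public random matrix
s = rng.integers(0, q, size=n)            # secret vector
e = rng.integers(-2, 3, size=m)           # small noise
b = (A @ s + e) % q                       # public LWE samples

# Recovering s from (A, b) is the search-LWE problem; without the noise e
# it would be trivial linear algebra, but with it, it is believed hard,
# even for quantum adversaries.
print("first samples of b:", b[:5])
```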

Citations: 0
Correction to “Enhanced Model for Edible Mushroom Recognition Based on Belief Measure-Weighted Fusion”
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-21 | DOI: 10.1002/cpe.70589

S. Yang, H. Wang, L. Huang, and X. Ma, “Enhanced Model for Edible Mushroom Recognition Based on Belief Measure-Weighted Fusion,” Concurrency and Computation: Practice and Experience 38, no. 1 (2026): e70520, https://doi.org/10.1002/cpe.70520.

In the first paragraph of Section 3.1.1 “Multicolor-Space Representation System,” the reference to Figure 4 was incorrect. The correct reference should be Figure 5.

In the first paragraph of Section 3.1.2 “Three-channel Probability-based Classifier,” the reference to Figure 5 was incorrect. The correct reference should be Figure 4.

In the last paragraph of Section 3.2.1 “Formulate the Basic Probability Assignment (BPA),” the text “The detailed calculation process of this case will be elaborated in Section 3.2.3 (A Case Study).” was incorrect. The correct statement should be: “The detailed calculation process of this case will be elaborated in Section 3.2.4 (A Case Study).”

In the last paragraph of Section 3.2.2.2 “Properties of the Belief Cosine Similarity Coefficient,” the text “This will be discussed in detail in Section 3.2.3 (A Case Study).” was incorrect. The correct statement should be: “This will be discussed in detail in Section 3.2.4 (A Case Study).”

In the first paragraph of Section 4.7.2 “Ablation study 2,” the reference to Figure 10 was incorrect. The correct reference should be Figure 11.

In the first paragraph of Section 4.7.4 “Ablation study 4,” the reference to Figure 11 was incorrect. The correct reference should be Figure 10.

We apologize for this error.

Citations: 0
A Vertex Partitioning Algorithm for Large-Scale Uncertain Graphs
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-20 | DOI: 10.1002/cpe.70580
Huanqing Cui, Anfu Chang, Jinbin Zhu, Ruixia Liu, Kekun Hu

With the exponential growth of graph-structured data, efficient single-machine analysis has become increasingly impractical, making high-performance distributed graph computing systems indispensable. The efficacy of these systems hinges critically on high-quality graph partitioning. The edges of many graphs arising from real applications are uncertain, yet most existing graph partitioning algorithms target only deterministic graphs and do not account for uncertainty. This paper presents PAUG (Partitioning Algorithm for Uncertain Graphs), a novel partitioning algorithm tailored for uncertain graphs. First, it formalizes the partitioning problem as an optimization task that minimizes the cut-edge ratio while balancing load. Second, it introduces probabilistic similarity to quantify vertex relationships under uncertainty. Finally, it details the PAUG algorithm, which consists of an initial partition phase and a score-function-guided refinement strategy. Experimental results show that PAUG achieves an average 23.2% reduction in cut-edge ratio and a 26.2% improvement in load balance over state-of-the-art algorithms.
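The two quantities the optimization task balances can be computed directly; the sketch below evaluates the expected cut-edge ratio (weighting each cut edge by its existence probability) and the load balance of a partition on a made-up uncertain graph. This illustrates the objective only, not the PAUG algorithm itself.

```python
# Expected cut-edge ratio and load balance for a partition of an
# uncertain graph. Graph, probabilities, and partition are made up.
# Each edge is (u, v, existence probability).
edges = [(0, 1, 0.9), (1, 2, 0.8), (2, 3, 0.5), (3, 0, 0.2), (1, 3, 0.6)]
part = {0: 0, 1: 0, 2: 1, 3: 1}           # vertex -> part id

total_p = sum(p for _, _, p in edges)      # expected number of edges
cut_p = sum(p for u, v, p in edges if part[u] != part[v])
cut_edge_ratio = cut_p / total_p           # expected fraction of cut edges

sizes = {}
for v, k in part.items():
    sizes[k] = sizes.get(k, 0) + 1
balance = max(sizes.values()) / (len(part) / len(sizes))   # 1.0 = perfect

print(f"expected cut-edge ratio = {cut_edge_ratio:.3f}, balance = {balance:.2f}")
```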

Citations: 0