
Concurrency and Computation-Practice & Experience: Latest Publications

Numerical Simulation on Effects of EEC Positions and Heights on Pool Fire Temperature Distribution Characteristics in Engine Fan Cavity
IF 1.5 · CAS Q4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering) · Pub Date: 2026-02-06 · DOI: 10.1002/cpe.70603
Guanbing Cheng, Haobo Luo, Zichen Liu, Guoda Wang, Liang He

When leaked fuel encounters a heat source, a pool fire in the confined fan cavity threatens engine operating safety. The structures, positions, and heights of various obstacles complicate pool fire propagation mechanisms. This study investigated pool fire plume propagation characteristics in an obstructed fan cavity. An LES turbulence model and a mixture-fraction diffusion flame combustion model were established, three EEC positions and heights were considered, and grid independence was verified. The effects of EEC positions and heights on fire plume temperature distribution in the fan cavity were then examined. The results show that the predicted fire plume temperature variations and distributions agree well with experimental data. The plume temperature undergoes an increase stage followed by a quasi-steady stage with oscillations. EEC positions and heights have no evident effect on plume temperature variations at the bottom of the fan cavity. An EEC positioned closer to the pool and with a greater height hinders fire propagation along the inner ring but accelerates the fire plume floating across the EEC and raises the local temperature at the left side or top of the fan cavity. Without an EEC, the plume propagates along the right side of the fan cavity, its local temperature remains relatively low, and its oscillation is more evident. EEC obstruction squeezes the flame shape, which alternates between asymmetric and symmetric forms, and the fire plume develops rearwards.
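
The abstract states that grid independence was verified but does not describe the procedure. As a minimal sketch of one standard verification for a scalar quantity such as peak plume temperature, the Python snippet below applies Richardson extrapolation and the Grid Convergence Index (GCI); the refinement ratio and the three temperatures are hypothetical, not values from the paper.

```python
# Grid-independence check via Richardson extrapolation and the GCI.
# All numbers below are hypothetical stand-ins, not the paper's data.
import math

def gci(phi_coarse, phi_fine, r, p, fs=1.25):
    """Grid Convergence Index for a scalar quantity of interest."""
    rel_err = abs((phi_coarse - phi_fine) / phi_fine)
    return fs * rel_err / (r**p - 1.0)

# Hypothetical peak plume temperatures (K) on three successively refined grids.
t_coarse, t_medium, t_fine = 812.0, 835.0, 841.0
r = 1.5  # assumed refinement ratio h_coarse / h_fine between successive grids

# Observed order of accuracy estimated from the three solutions.
p = math.log(abs((t_medium - t_coarse) / (t_fine - t_medium))) / math.log(r)
print(f"observed order p = {p:.2f}")
print(f"GCI(fine grid) = {gci(t_medium, t_fine, r, p):.3%}")  # small GCI -> grid-independent
```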

Citations: 0
Prediction and Optimization of Magnetic Core Loss Driven by Multifactor Coupling
IF 1.5 · CAS Q4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering) · Pub Date: 2026-02-05 · DOI: 10.1002/cpe.70598
Yanghua Gao, Yahui Chen, Zhenzhen Xu, Bin Ye, Hailiang Lu, Sen Liu

Accurate prediction of magnetic core loss is paramount for advancing power electronics but is severely hindered by the nonlinear coupling of multiple factors, including frequency, flux density, excitation waveform, temperature, and material properties. To address this, we propose a comprehensive data-driven framework that integrates model refinement, coupling analysis, and multiobjective optimization. First, a temperature-corrected Steinmetz equation is developed, extending the equation's range of validity and reducing the mean relative prediction error by more than half. Second, the individual and synergistic effects of the key variables are quantified via regression and the Artificial Hummingbird Algorithm (AHA), pinpointing the optimal combination for global loss minimization. Third, a highly accurate Gradient Boosting Decision Tree (GBDT) model is established as a fast surrogate, leveraging key features such as peak flux density for subsequent optimization. Finally, a constrained multiobjective optimization is performed, balancing core loss minimization against magnetic energy transfer maximization. The resulting optimal design achieves a 30% core loss reduction compared to conventional benchmarks without compromising energy throughput. This work provides a systematic, data-driven pathway for the design of high-performance magnetic components, offering significant performance gains over traditional approaches.
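
As a concrete illustration of the first step, here is a minimal sketch of fitting a temperature-corrected Steinmetz equation by nonlinear least squares. The quadratic correction factor (1 + c1·T + c2·T²) is an assumed form, since the paper's exact correction is not given here, and the data are synthetic.

```python
# Fit P = k * f^alpha * B^beta * (1 + c1*T + c2*T^2) in log space.
# The quadratic temperature term is an assumption; the data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def log_core_loss(X, log_k, alpha, beta, c1, c2):
    """log of the temperature-corrected Steinmetz loss."""
    f, B, T = X
    return log_k + alpha * np.log(f) + beta * np.log(B) + np.log1p(c1 * T + c2 * T**2)

rng = np.random.default_rng(0)
n = 200
f = rng.uniform(5e4, 5e5, n)        # frequency, Hz
B = rng.uniform(0.05, 0.3, n)       # peak flux density, T
T = rng.uniform(25.0, 120.0, n)     # core temperature, degrees C
P = 2.1 * f**1.4 * B**2.5 * (1 - 3e-3 * T + 1e-5 * T**2)  # synthetic "truth"
P *= rng.normal(1.0, 0.02, n)       # 2% multiplicative measurement noise

p0 = [0.0, 1.5, 2.5, 0.0, 0.0]      # start from the classic Steinmetz form
popt, _ = curve_fit(log_core_loss, (f, B, T), np.log(P), p0=p0)
pred = np.exp(log_core_loss((f, B, T), *popt))
print(f"alpha={popt[1]:.3f} beta={popt[2]:.3f} "
      f"mean relative error={np.mean(np.abs(pred - P) / P):.2%}")
```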

Citations: 0
MMLG-Point: Unsupervised Pretraining Approach for Cattle Point Cloud Segmentation and Measurement
IF 1.5 · CAS Q4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering) · Pub Date: 2026-02-04 · DOI: 10.1002/cpe.70596
Zhi Weng, Yuzhe Bian, Zhiqiang Zheng, Wenwen Hao

Manual measurement of cattle body size presents challenges: it can induce stress responses in the animals and is inefficient. For large livestock such as cattle, measurement based on full point clouds involves extensive computation and interference between different point-cloud sections. To address this, we propose MMLG-Point, a novel deep learning model for cattle point cloud segmentation and body size measurement that introduces a Multilevel Geometric Perception Encoder and a Transformer-based decoder architecture. The encoder integrates Kernel Point Convolution (KPConv) and Separable Structure-Aware Learning (SSAL) with residual multiscale fusion to capture the local geometric structure of large-livestock point clouds, while the decoder employs CrossNorm and SelfNorm (CNSN) modules to enhance generalization under limited labeled data. Furthermore, an unsupervised pretraining strategy based on masked point reconstruction is proposed, enabling the model to learn structural and semantic representations from unlabeled cattle point clouds. Experimental results demonstrate that MMLG-Point achieves outstanding segmentation accuracy with minimal supervision, obtaining an overall accuracy (OA) of 94.3% and a mean Intersection over Union (mIoU) of 89.4% on the Simmental cattle dataset using only 12 labeled samples. The model also exhibits strong cross-species generalization, achieving 92.3% OA and 86.7% mIoU on pig datasets. Based on the segmentation results, an automatic body measurement algorithm is developed that combines density analysis, curvature detection, and contour extraction to compute parameters such as withers height, hip height, body length, chest girth, and abdominal circumference, achieving a mean absolute percentage error (MAPE) below 6%. These results confirm that the proposed MMLG-Point framework provides an effective and generalizable approach for high-precision segmentation and measurement of large-livestock point clouds.
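
A minimal sketch of the masked point reconstruction objective mentioned above: a random subset of points is hidden, and a reconstruction is scored with the symmetric Chamfer distance. The masking ratio and the stand-in "reconstruction" are illustrative; MMLG-Point's actual network is not reproduced here.

```python
# Masked-point pretraining objective, illustrated with the Chamfer distance.
# The 60% masking ratio and the noisy stand-in reconstruction are assumptions.
import numpy as np

def chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) squared distances
    return d2.min(1).mean() + d2.min(0).mean()

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3))      # stand-in for a cattle point cloud
mask = rng.random(1024) < 0.6           # hide 60% of points for pretraining
visible, hidden = cloud[~mask], cloud[mask]

# A trained decoder would predict `hidden` from `visible`; a noisy copy
# stands in here so the loss can be evaluated end to end.
recon = hidden + rng.normal(scale=0.01, size=hidden.shape)
print(f"masked {mask.sum()} / 1024 points, Chamfer loss = {chamfer(recon, hidden):.5f}")
```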

Citations: 0
Sequence Recommendation for Mobile Application via Time Interval-Aware Attention and Contrastive Learning
IF 1.5 · CAS Q4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering) · Pub Date: 2026-01-27 · DOI: 10.1002/cpe.70585
Buqing Cao, Junyi Chen, Ziming Xie, Wenyu Zhao, Sheng Lin, Longxin Zhang

Mobile application recommendation has emerged as a pivotal domain within personalized recommendation systems. Traditional mobile application sequence recommendation approaches are predominantly dedicated to designing sophisticated sequence encoders to achieve more precise representations. However, existing sequence recommendation methods primarily consider the sequential order of historical app interactions, overlooking the time intervals between them. This oversight hinders the model's ability to fully exploit the temporal correlations in user behavior, consequently limiting the accuracy and personalization of mobile application recommendations. Moreover, the interactions between users and mobile applications are typically sparse, which weakens the model's generalization. To address these issues, we propose a novel method for mobile application sequence recommendation that incorporates time interval-aware attention and contrastive learning (Ti-CoRe). Specifically, this approach introduces a novel sequence augmentation strategy based on similarity replacement within a contrastive learning framework. By considering the textual similarity between applications, the method selectively replaces applications with lower similarity scores to generate augmented sequences, increasing the diversity of the sample space and mitigating data sparsity. Furthermore, by integrating a time interval-aware mechanism into the BERT4Rec model, the paper presents a new T-BERT encoder. It precisely assesses the influence of fluctuating time intervals on the prediction of the next mobile application, thereby ensuring a more nuanced app representation. Experiments on the 360APP real-world dataset demonstrate that Ti-CoRe consistently outperforms various baseline models on NDCG and HR metrics.
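
To make the time interval-aware attention idea concrete, the sketch below adds a bucketed time-gap bias to ordinary scaled dot-product attention. The bucket boundaries and bias table are assumptions; T-BERT's exact parameterization is not published in this abstract.

```python
# Scaled dot-product attention with an assumed bucketed time-interval bias.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
L, d = 6, 16                                  # sequence length, head dimension
Q, K, V = (rng.normal(size=(L, d)) for _ in range(3))
t = np.array([0, 2, 3, 10, 11, 30], float)    # app-launch timestamps (hours)

gaps = np.abs(t[:, None] - t[None, :])        # pairwise time intervals
buckets = np.digitize(gaps, [1, 4, 12, 24])   # assumed coarse interval buckets
bias_table = rng.normal(scale=0.5, size=5)    # one (learnable) bias per bucket
scores = Q @ K.T / np.sqrt(d) + bias_table[buckets]
out = softmax(scores) @ V                     # (L, d) interval-aware representations
print(out.shape)
```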

Citations: 0
FPGA-Accelerated Real-Time Tennis Serving Robot With DSP-Efficient Convolutional Neural Network
IF 1.5 · CAS Q4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering) · Pub Date: 2026-01-27 · DOI: 10.1002/cpe.70579
Tengfei Li, Shenshen Gu, Yulong Ren

Artificial intelligence hardware accelerators are gaining importance in domains such as computer vision and robotics. However, deploying Convolutional Neural Networks (CNNs) on embedded systems with constrained resources and memory remains a major challenge. Motivated by the requirements of robotic vision, this paper presents a DSP-Efficient Packing Strategy (DEPS) accelerator architecture tailored for lightweight CNNs, improving both computational throughput and hardware efficiency in real-time robotic applications. Unlike previous FPGA designs that underutilize DSP blocks, the proposed DEPS enables the parallel execution of twelve 3-bit multiplications within a single DSP48E2 unit. A layer-wise pipelined mapping scheme is also proposed, which maps each CNN layer directly onto hardware without intermediate buffering, ensuring continuous computation and minimizing latency. The proposed accelerator is incorporated into an intelligent tennis serving robot as the real-time vision module for object detection. Experimental results on VGG7-tiny and UltraNet demonstrate throughputs of 299.4 GOPS and 340.0 GOPS, respectively, with power efficiencies of 80.1 GOPS/W and 89.2 GOPS/W. The robotic system deployment confirms superior DSP utilization, enabling rapid, energy-efficient, and reliable perception. This work highlights the potential of the proposed design for deployment on resource-constrained edge platforms and in practical robotics.
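
The sketch below illustrates the general principle behind such DSP packing: several small unsigned products share one wide multiplier, separated by guard bits so the partial products never overlap. It packs two 3-bit multiplications for clarity; DEPS's twelve-way DSP48E2 layout uses the same idea with a more elaborate operand arrangement that the abstract does not detail.

```python
# Pack two 3-bit unsigned multiplications into ONE wide multiply.
# A 3-bit x 3-bit product needs at most 6 bits (7*7 = 49 < 64), so a
# 6-bit guard field keeps the two partial products from overlapping.
GUARD = 6

def packed_mul2(a: int, b1: int, b2: int) -> tuple[int, int]:
    """Compute a*b1 and a*b2 (all 3-bit unsigned) with a single multiplication."""
    assert all(0 <= v < 8 for v in (a, b1, b2))
    packed = b1 | (b2 << GUARD)         # place b2 above the guard field
    product = a * packed                # the one wide multiply (the DSP op)
    return product & (1 << GUARD) - 1, product >> GUARD

for a, b1, b2 in [(7, 7, 7), (5, 3, 6), (0, 7, 1)]:
    lo, hi = packed_mul2(a, b1, b2)
    assert (lo, hi) == (a * b1, a * b2)
    print(f"{a}*{b1}={lo}, {a}*{b2}={hi}  (one multiply)")
```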

Citations: 0
Machine Learning-Based Data Deduplication: Techniques, Challenges, and Future Directions
IF 1.5 · CAS Q4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering) · Pub Date: 2026-01-25 · DOI: 10.1002/cpe.70574
Ravneet Kaur, Harcharan Jit Singh, Inderveer Chana

Data deduplication plays an important role in modern data management: it reduces storage costs and ensures consistency by eliminating redundant records. Traditional deduplication methods are effective for exact matches but struggle to adapt and to detect near-duplicate records in unstructured or complex data. Machine learning (ML) addresses these limitations by using pattern recognition, feature learning, and statistical modeling to identify subtle similarities between records. This review classifies ML-based deduplication techniques into supervised, unsupervised, semi-supervised, and deep learning methodologies. It also discusses key challenges, including class imbalance, model interpretability, and computational overhead. The paper further surveys recent developments in federated learning, real-time deduplication, and multimodal techniques to highlight current trends in these areas. Finally, it identifies key open issues and proposes a unified perspective for scalable, real-time deduplication systems that can accommodate diverse data types, structures, and system requirements.
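
As a concrete example of the similarity-based matching these ML methods rely on, here is a minimal near-duplicate detection sketch using character n-gram TF-IDF and cosine similarity. The records and the 0.6 threshold are illustrative; production systems add blocking and a trained pairwise classifier.

```python
# Near-duplicate record detection: char n-gram TF-IDF + cosine similarity.
# The records and the 0.6 decision threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

records = [
    "John Smith, 12 Baker Street, London",
    "Jon Smith, 12 Baker St., London",     # near-duplicate of the first
    "Mary Jones, 4 Elm Avenue, Leeds",
]
X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit_transform(records)
sim = cosine_similarity(X)

for i in range(len(records)):
    for j in range(i + 1, len(records)):
        dup = sim[i, j] >= 0.6
        print(f"sim={sim[i, j]:.2f} duplicate={dup}: {records[i]!r} vs {records[j]!r}")
```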

Citations: 0
An Explainable Ensemble Machine Learning Method for Electric Vehicles Energy Consumption Rate Estimation
IF 1.5 · CAS Q4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering) · Pub Date: 2026-01-24 · DOI: 10.1002/cpe.70571
Mohammed Zaid Ghawy, Shuyan Chen, Sajan Shaikh, Aamir Hussain, Rajasekhar Balasubramanian, Yongfeng Ma

The rapid adoption of electric vehicles (EVs) highlights the need for intelligent systems to improve energy efficiency and optimize driving range. Since energy consumption and driving range modeling are closely related, understanding the energy consumption (EC) of EVs can provide essential insights to drivers and reduce "range anxiety." Previous studies have relied on traditional analytical and statistical methods, which often represent the influential factors poorly and lack interpretability in EC modeling. To address this issue, we propose an explainable ensemble machine learning model to predict the EC of EVs that considers the most important features and the factors with the greatest influence on EC. The Spritmonitor public real-world dataset is used for this study. First, the data are preprocessed before being fed into the ensemble method. Second, the Energy Consumption Rate (ECR) is predicted using Gradient Boosting Regression Trees (GBRT). The proposed predictive framework demonstrates superior prediction accuracy compared to baseline models: GBRT achieved the highest R² (1 and 0.99 for training and testing, respectively) and the lowest MAE (0.08) and RMSE (0.16) compared to XGBoost, LightGBM, and CatBoost. Finally, SHAP (Shapley Additive exPlanations) analysis was applied to explain the proposed model and identify the most influential factors, including driving range, capacity, speed, state of charge (SOC), ambient temperature, road type, driving style, and air conditioning and heating usage. The results suggest that the proposed framework can effectively enhance the prediction of EV energy consumption and facilitate the analysis of its driving factors, thereby supporting intelligent trip planning and adaptive energy-aware management in transportation systems while providing insightful feedback to drivers.
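
A minimal sketch of the GBRT-plus-SHAP pipeline described above, run on synthetic data. The feature names and the ground-truth relation are hypothetical stand-ins for the Spritmonitor schema.

```python
# GBRT regression for energy-consumption rate + SHAP feature attribution.
# Features and the synthetic target below are hypothetical, not Spritmonitor.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "speed_kmh": rng.uniform(20, 130, n),
    "ambient_temp_c": rng.uniform(-10, 35, n),
    "soc_pct": rng.uniform(10, 100, n),
})
# Hypothetical ground truth: faster and colder driving costs more kWh/100 km.
y = 8 + 0.08 * X.speed_kmh + 0.12 * np.abs(20 - X.ambient_temp_c) + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"test R^2 = {model.score(X_te, y_te):.3f}")

shap_values = shap.TreeExplainer(model).shap_values(X_te)
print("mean |SHAP| per feature:",
      dict(zip(X.columns, np.abs(shap_values).mean(0).round(3))))
```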

Citations: 0
Carbon Emission Prediction for Gas Power Plants Based on Deep Learning Under Small-Sample Conditions
IF 1.5 · CAS Q4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering) · Pub Date: 2026-01-24 · DOI: 10.1002/cpe.70591
Xiaozhou Fan, Zhe Wang, Hanwen Bi, Ruiyang Wang

Accurate forecasting of carbon emissions from power generation enterprises is essential under China's dual-control policy. Although deep learning methods show strong potential, studies on their optimal configuration remain limited. This paper proposes a hybrid deep learning framework integrating a convolutional neural network (CNN), bidirectional long short-term memory (BiLSTM), and an attention mechanism for carbon emission prediction in natural gas power plants. Two distinct optimization methodologies are compared: a structured design strategy encompassing light, medium, and heavy configurations, and Bayesian optimization for hyperparameter tuning. The models were evaluated using 5-fold cross-validation on 619 operational samples from two 487.1-MW condensing units at a power plant in Hainan, China. The medium configuration achieved the best balance between accuracy, efficiency, and stability, with R² = 0.9833, RMSE = 0.0342, and MAE = 0.0242. Under small-sample conditions, the structured design approach outperformed Bayesian optimization by 0.16% in accuracy while requiring only 7.42% of the training time. The proposed framework provides an efficient and interpretable reference for selecting deep learning architectures in small-sample industrial regression tasks and supports intelligent, low-carbon power generation applications.
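
A minimal PyTorch sketch of the CNN-BiLSTM-attention architecture described above. The layer sizes are assumptions standing in for the paper's "medium" configuration, which is not specified in the abstract.

```python
# CNN feature extraction -> BiLSTM -> additive attention pooling -> regressor.
# All layer sizes below are assumed; the paper's configurations are not given.
import torch
import torch.nn as nn

class CNNBiLSTMAttn(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # additive attention scores
        self.head = nn.Linear(2 * hidden, 1)   # carbon-emission regression head

    def forward(self, x):                      # x: (batch, time, features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (B, T, 32)
        h, _ = self.lstm(h)                                # (B, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)             # (B, T, 1) weights
        return self.head((w * h).sum(dim=1)).squeeze(-1)   # (B,)

model = CNNBiLSTMAttn(n_features=8)
print(model(torch.randn(4, 24, 8)).shape)      # torch.Size([4])
```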

Citations: 0
A Real-Time Automated Library Inventory System Based on Edge-Cloud Collaboration
IF 1.5 · CAS Q4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering) · Pub Date: 2026-01-24 · DOI: 10.1002/cpe.70573
Lu Zhu, Zhihui Gu, Kai Zhu, Xingcheng Xu, Jingzhi Wang, Yuanyuan Liu

Library inventory is vital for collection management and reader satisfaction. Conventional manual methods cannot support real-time updates, while existing automated solutions that rely on centralized cloud computing suffer from bandwidth and latency limitations. To address these issues, we propose an edge-cloud collaborative real-time book inventory system: spine detection and text recognition run on embedded edge devices, while the cloud handles rapid data retrieval, balancing timeliness and accuracy. We design lightweight models for edge deployment, including the Library You Only Look Once (Lib-YOLO) detector with a StarNet backbone, a shared convolutional head, and dual-scale hierarchical detection, which supports rotated objects for robust spine extraction. The optimized Paddle Practical Optical Character Recognition (PP-OCR) pipeline removes text rectification and integrates a filtering algorithm to reduce redundant computation and improve efficiency. Deployed on an NVIDIA Jetson Nano, the system achieves 73 ms spine detection latency, 191 ms text recognition latency, and 97.1% overall accuracy under simulated library conditions. The Lib-YOLO model contains only 1.39 M parameters with 99% mean average precision (mAP), demonstrating the feasibility of precise, real-time inventorying in resource-constrained embedded environments.
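
The abstract mentions a filtering algorithm that reduces redundant computation but does not specify it. The sketch below shows one plausible stand-in: greedy IoU-based suppression that collapses heavily overlapping detected boxes so OCR runs once per region; the 0.5 threshold is an assumption.

```python
# Greedy IoU-based suppression of redundant detection boxes before OCR.
# This is an assumed stand-in for the paper's unspecified filtering step.
def iou(a, b):
    """Intersection over union of axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def filter_boxes(boxes, scores, thr=0.5):
    """Keep the highest-scoring box from each overlapping cluster."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[k]) < thr for k in kept):
            kept.append(i)
    return [boxes[i] for i in kept]

spines = [(10, 0, 40, 200), (12, 0, 42, 200), (50, 0, 80, 200)]  # two overlap
print(filter_boxes(spines, scores=[0.9, 0.8, 0.95]))  # -> two boxes survive
```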

Citations: 0
SecureChain: A Blockchain-Based Secure Model for Sharing Privacy-Preserved Data Using Local Differential Privacy
IF 1.5 · CAS Q4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering) · Pub Date: 2026-01-24 · DOI: 10.1002/cpe.70473
Altaf Hussain, Laraib Javed, Muhammad Inam Ul Haq, Razaullah Khan, Wajahat Akbar, Razaz Waheeb Attar, Ahmed Alhazmi, Amal Hassan Alhazmi, Tariq Hussain

Privacy-Preserving Data Sharing (PPDS) masks individuals' collected data (e.g., medical healthcare data) before organizations disseminate it for analysis and research. Patient data contains sensitive values that must be handled while ensuring that certain privacy conditions are met; this minimizes the risk of re-identifying an individual record within a group of privacy-preserved data. However, with advances in technology (i.e., Big Data, the Internet of Things (IoT), and Blockchain), existing classical privacy-preserving techniques are becoming obsolete. In this paper, we propose a blockchain-based secure data sharing technique named "SecureChain", which preserves the privacy of individual records using local differential privacy (LDP). The three distinguishing features of the proposed approach are lower latency, higher throughput, and improved privacy. The proposed model outperforms the benchmarks in terms of both latency and throughput, and it improves accuracy to 88.53% compared to its counterparts, which achieved 49% and 85%. The experimental results verify that the proposed approach outperforms its counterparts.
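
To illustrate the LDP mechanism family SecureChain builds on, here is a minimal sketch of randomized response for one binary attribute, with the standard debiased estimator. Whether SecureChain uses exactly this perturbation is not stated in the abstract.

```python
# Local differential privacy via randomized response for one binary attribute,
# plus the standard unbiased estimator of the population proportion.
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with prob e^eps / (e^eps + 1), else flip it."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def estimate_mean(reports, epsilon):
    """Debias the noisy mean: E[report] = (2p - 1) * truth + (1 - p)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return (sum(reports) / len(reports) + p - 1.0) / (2.0 * p - 1.0)

random.seed(0)
truth = [1] * 300 + [0] * 700            # 30% of patients have the condition
reports = [randomized_response(b, epsilon=1.0) for b in truth]
print(f"raw noisy mean    = {sum(reports) / len(reports):.3f}")
print(f"debiased estimate = {estimate_mean(reports, 1.0):.3f}  (true 0.300)")
```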

Citations: 0