
Latest Articles in Concurrency and Computation-Practice & Experience

MMLG-Point: Unsupervised Pretraining Approach for Cattle Point Cloud Segmentation and Measurement
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-04 | DOI: 10.1002/cpe.70596
Zhi Weng, Yuzhe Bian, Zhiqiang Zheng, Wenwen Hao

Manual measurement of cattle body size is challenging: it can induce stress responses in the animals and is inefficient. For large livestock such as cattle, measurement based on full point clouds involves extensive computation and interference between different sections of the cloud. To address this, we propose MMLG-Point, a novel deep learning model for cattle point cloud segmentation and body size measurement, which introduces a Multilevel Geometric Perception Encoder and a Transformer-based decoder architecture. The encoder integrates Kernel Point Convolution (KPConv) and Separable Structure-Aware Learning (SSAL) with residual multiscale fusion to capture local geometric structures of large livestock point clouds, while the decoder employs CrossNorm and SelfNorm (CNSN) modules to enhance generalization under limited labeled data. Furthermore, an unsupervised pretraining strategy based on masked point reconstruction is proposed, enabling the model to learn structural and semantic representations from unlabeled cattle point clouds. Experimental results demonstrate that MMLG-Point achieves outstanding segmentation accuracy with minimal supervision, obtaining an overall accuracy (OA) of 94.3% and a mean Intersection over Union (mIoU) of 89.4% on the Simmental cattle dataset using only 12 labeled samples. The model also exhibits strong cross-species generalization, achieving 92.3% OA and 86.7% mIoU on pig datasets. Based on segmentation results, an automatic cattle body measurement algorithm is developed, incorporating density analysis, curvature detection, and contour extraction to compute parameters such as withers height, hip height, body length, chest girth, and abdominal circumference, achieving a mean absolute percentage error (MAPE) below 6%. These results confirm that the proposed MMLG-Point framework provides an effective and generalizable approach for high-precision segmentation and measurement of large livestock point clouds.
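The masked-point-reconstruction pretraining idea can be illustrated generically: hide a fraction of the cloud and score the model's reconstruction of the hidden part with Chamfer distance. The sketch below is a minimal NumPy illustration of that general technique, not the authors' MMLG-Point implementation; the function names and the 60% mask ratio are illustrative assumptions.

```python
import numpy as np

def mask_points(points, mask_ratio=0.6, rng=None):
    """Split a point cloud of shape (N, 3) into visible and masked subsets.

    During pretraining, the model sees only the visible points and is
    trained to reconstruct the masked subset.
    """
    rng = np.random.default_rng(rng)
    n = points.shape[0]
    n_masked = int(n * mask_ratio)
    idx = rng.permutation(n)
    return points[idx[n_masked:]], points[idx[:n_masked]]

def chamfer_distance(a, b):
    """Symmetric Chamfer distance (squared-L2) between two point sets,
    a common reconstruction loss for point clouds."""
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

A pretraining step would feed the visible subset to the encoder and minimize `chamfer_distance(reconstruction, masked)`.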

Citations: 0
Robust Triple-Color Watermarking Using Elliptical Monogenic Wavelet Transform and Singular Value Decomposition
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-04 | DOI: 10.1002/cpe.70588
Chenxuan Wang, Lili Chen, Bin Gao, Yutong Li, Shutian Liu, Zhengjun Liu

This paper presents a novel triple-color watermarking algorithm that leverages the elliptical monogenic wavelet transform (EMWT) and singular value decomposition (SVD) for robust and secure watermark embedding. The proposed method first encrypts the host image using a random matrix. Subsequently, both the host and watermark images undergo preprocessing techniques, including EMWT, discrete wavelet transform (DWT), discrete cosine transform (DCT), and SVD, to obtain the diagonal matrices required for embedding. For enhanced security, the watermark images are encrypted using the Arnold method with phase and linear representation embedding, where the second key is derived from the host image via Fourier transform. The watermarked image is then generated by applying the corresponding inverse transforms. Experimental results demonstrate the algorithm's excellent embedding and extraction performance on color images and icons with characters. Notably, the proposed method exhibits strong robustness against various attacks, including non-geometric attacks (e.g., noise, filtering), geometric attacks (e.g., rotation, cropping), and mixed attacks, outperforming existing color watermarking methods in terms of anti-attack ability and robustness.
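The SVD step the abstract builds on is commonly realized by perturbing the host block's singular values, S' = S + αW. The sketch below shows only that generic embed/extract pair, not the paper's full EMWT/DWT/DCT/Arnold pipeline; `alpha` and the use of the stored factors as the extraction key are illustrative assumptions.

```python
import numpy as np

def svd_embed(host, watermark, alpha=0.05):
    """Embed a watermark vector into the singular values of a host block.

    Returns the watermarked block and the factors needed for extraction.
    """
    u, s, vt = np.linalg.svd(host, full_matrices=False)
    s_marked = s + alpha * watermark
    return u @ np.diag(s_marked) @ vt, (u, s, vt)

def svd_extract(marked, factors, alpha=0.05):
    """Recover the watermark from a watermarked block's singular values,
    using the original singular values as side information."""
    _, s, _ = factors
    _, s_marked, _ = np.linalg.svd(marked, full_matrices=False)
    return (s_marked - s) / alpha
```

A small `alpha` keeps the perturbation (and hence the visible distortion) small; extraction assumes the perturbed singular values keep their descending order.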

Citations: 0
SACFFNet: Shuffle Attention Convolutional Forward Fractional Network Based Attack Detection in Cloud Computing
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-04 | DOI: 10.1002/cpe.70504
Aditya Kumar Shukla, Ashish Sharma, Sandeep Singh Sengar

Cloud computing is a major evolution in Information Technology (IT), providing virtualized and scalable resources to end users with minimal maintenance and cost. However, this environment is vulnerable to various attacks. These attacks cause heavy damage and degrade the performance of the cloud. Therefore, timely detection of attacks in cloud computing is crucial. Hence, an efficient detection model is required for identifying various attacks in cloud computing. In this research, the Shuffle Attention Convolutional Forward Fractional Network (SACFFNet) is presented to detect attacks in cloud computing. Initially, the cloud is simulated and a recorded log file is obtained from certain datasets. Thereafter, feature scaling is accomplished employing the minimum-maximum method. Then, features are selected using chord distance and Support Vector Machine Recursive Feature Elimination (SVM-RFE). Afterwards, data augmentation is done utilizing the Synthetic Minority Oversampling Technique (SMOTE). Lastly, attacks are detected employing the newly designed SACFFNet. The SACFFNet is designed by integrating a Shuffle Attention Network (SA-Net) with a Convolutional Neural Network (CNN), with layer modifications based on Fractional Calculus (FC). Additionally, SACFFNet attained 91.678% accuracy, 92.884% sensitivity, and 92.090% specificity.
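Two of the preprocessing steps named above have compact textbook forms: minimum-maximum scaling maps each feature column to [0, 1], and SMOTE synthesizes minority samples by interpolating toward nearest minority neighbors. A minimal sketch of both generic techniques, assuming NumPy; the function names and defaults are illustrative, not the paper's pipeline.

```python
import numpy as np

def min_max_scale(x, eps=1e-12):
    """Column-wise minimum-maximum scaling of a feature matrix to [0, 1]."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / np.maximum(hi - lo, eps)

def smote_sample(minority, k=3, n_new=10, rng=None):
    """SMOTE-style augmentation: each synthetic sample interpolates between
    a random minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]  # skip index 0: the point itself
        j = rng.choice(nbrs)
        u = rng.random()
        out.append(minority[i] + u * (minority[j] - minority[i]))
    return np.array(out)
```

Because each synthetic point lies on a segment between two minority points, augmentation never leaves the minority class's bounding region.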

Citations: 0
DH-Chain: Double Heap Based Blockchain Sharding Framework
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-04 | DOI: 10.1002/cpe.70586
Chen Wang, Hu Xia, Jianbin Gao, Qi Xia

Blockchain sharding has emerged as a promising solution to address the scalability challenges of modern blockchain systems, yet in practice, existing sharding systems still suffer from serious performance bottlenecks. In recent work, LB-Chain proposes a framework for load balancing by dynamically migrating hot accounts, which significantly improves throughput and reduces latency. However, the approach still suffers from the computational overhead associated with frequent full ordering, which limits its efficiency in large-scale systems. To address this issue, this paper proposes an enhanced sharding framework, DH-Chain, which improves computational efficiency by introducing heap sorting optimization, dynamically managing the load of sharding and accounts, and avoiding full-volume sorting during each round of migration. The experimental results show that DH-Chain achieves 3.2% higher throughput than LB-Chain and 11.4% higher than random allocation, approaching the theoretical upper bound of ideal allocation. By leveraging heap-based sorting and two-phase commit protocols, DH-Chain ensures atomicity and security while reducing computational overhead. The framework effectively balances shard loads, maintaining consistent performance across varying transaction loads and demonstrating robust scalability.
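The core advantage of a heap over full re-sorting can be shown with the classic greedy balancing heuristic: assign each hot account (heaviest first) to whichever shard is currently lightest, tracked in a min-heap so each assignment costs O(log n) instead of a fresh sort. This is a generic sketch of that heuristic, not DH-Chain's double-heap design; names and the load model are assumptions.

```python
import heapq

def assign_accounts(loads, num_shards):
    """Greedy load balancing: hand the next-heaviest account to the currently
    lightest shard, maintained in a min-heap of (shard_load, shard_id)."""
    heap = [(0, s) for s in range(num_shards)]
    heapq.heapify(heap)
    assignment = {}
    for account, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        shard_load, shard = heapq.heappop(heap)  # lightest shard so far
        assignment[account] = shard
        heapq.heappush(heap, (shard_load + load, shard))
    return assignment
```

One initial sort of the accounts plus heap updates replaces re-sorting the shard loads on every assignment.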

Citations: 0
InEMMiner: An Incremental Learning Model for Exceptional Model Mining Using Optimized Deep Learning
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-04 | DOI: 10.1002/cpe.70530
Neethu John, J. R. Jeba

Data mining involves evaluating big data to reveal patterns or connections that can aid in solving business problems. Exceptional model mining (EMM) is a system for finding local patterns in a dataset by identifying subgroups whose behavior deviates substantially from that which is anticipated. Incremental learning allows the model to continuously learn from new examples while retaining prior knowledge. In EMM, incremental learning extends the process by adding new training samples, which helps to enhance the model's performance over time. This research presents an incremental learning approach in EMMiner (InEMMiner) using a deep learning model. A novel hybrid optimizing algorithm, the double exponential-running city game optimizer (DE-RCGO), is proposed by combining double exponential smoothing (DES) and the running city game optimizer (RCGO) for the purpose of finding outstanding subgroups in within-individual clusters. To assess the quality of these subgroups, four different metrics are suggested: (1) a combination of weighted Kraskov entropy with a support vector machine (SVM), (2) a combination of weighted Fuzzy entropy with an Autoencoder, (3) a combination of weighted Boltzmann entropy with a gated recurrent unit (GRU), and (4) a metric that combines weighted Gibbs entropy with isolation forest. These measures are incorporated in the EMMiner system in order to effectively filter and prioritize the most appropriate exceptional subgroups, improving overall EMM task performance. InEMMiner incorporates an incremental learning approach using a deep learning model, TabNet, which is further optimized by the proposed DE-RCGO algorithm. Experimental results validate the efficiency of the model, with the quality metric achieving a value of 98.9%, memory usage of 985.65 MB, and a computational time of 81.52 s.
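The general shape of an entropy-weighted subgroup quality measure in EMM can be sketched simply: compare a subgroup's label distribution against the whole dataset and weight the deviation by subgroup size. The sketch below is a generic EMM-style measure for illustration only; it is not any of the paper's four proposed metrics, and the size weighting is an assumption.

```python
import numpy as np

def shannon_entropy(labels):
    """Shannon entropy (bits) of a discrete label sequence."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def subgroup_quality(sub, full):
    """Size-weighted entropy gap: how far a subgroup's label distribution
    deviates from the full dataset's, scaled by sqrt of the relative size
    so that tiny subgroups do not dominate the ranking."""
    weight = np.sqrt(len(sub) / len(full))
    return weight * abs(shannon_entropy(sub) - shannon_entropy(full))
```

A mining loop would score candidate subgroups with such a measure and keep the top-ranked ones as "exceptional."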

Citations: 0
CNEKA: An Algorithm for SDN Controller Placement Based on Graph Convolutional Networks
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-04 | DOI: 10.1002/cpe.70595
Yirui Rao, Jue Chen, Xihe Qiu

With the rapid development of software-defined networking (SDN), the single-controller architecture is unable to meet the performance and reliability requirements of the whole system. Consequently, a distributed multicontroller architecture has been proposed, in which the number and locations of controllers must be determined rigorously, formulating the controller placement problem (CPP). In order to solve the CPP by optimizing the propagation latency, we propose a convolutional node embedding and K-means algorithm (CNEKA), which integrates information propagation among adjacent nodes and calculation of embedding vectors by graph convolutional networks (GCNs) with graph segmentation by the K-means algorithm. To the best of our knowledge, this is the first work to apply GCNs to solving the CPP. The study demonstrates that the CNEKA algorithm significantly enhances performance in optimizing average and worst-case latency between controllers and switches, as well as the propagation latency of the whole network, under varied experimental conditions. CNEKA achieves up to a 64.18% reduction in propagation latency when compared to other algorithms, and maintains high stability with a fluctuation of less than 7.6% when the same experiments are repeated several times. Moreover, CNEKA can always find the optimal or near-optimal solution with an error of less than 11.15% when compared with the global optimal solution.
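The second stage of the approach — segmenting nodes by K-means over their embedding vectors and placing one controller per cluster — can be sketched generically: run Lloyd's K-means on the embeddings, then pick the member node closest to each cluster center as that cluster's controller site. This is a minimal illustration, not CNEKA itself; the deterministic initialization from the first k points and the medoid-style placement rule are assumptions for the sketch.

```python
import numpy as np

def kmeans(x, k, iters=50):
    """Plain Lloyd's K-means over node embedding vectors; initialised from
    the first k points for determinism (a real system would use k-means++)."""
    centers = x[:k].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(x[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = x[labels == c].mean(axis=0)
    return labels, centers

def pick_controllers(x, labels, centers):
    """Place each controller at the member node nearest its cluster centre."""
    sites = []
    for c in range(len(centers)):
        members = np.flatnonzero(labels == c)
        dists = np.linalg.norm(x[members] - centers[c], axis=1)
        sites.append(int(members[dists.argmin()]))
    return sites
```

With embeddings that encode propagation latency, clustering in embedding space groups switches around low-latency controller locations.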

Citations: 0
A Multi-Layered Analysis of Energy Consumption in Spark
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-03 | DOI: 10.1002/cpe.70565
Nestor D. O. Volpini, Vinícius Dias, Dorgival Guedes

Although energy has become a major concern in data processing systems, it is usually hard to get a deep understanding of how performance and energy consumption relate to each other when planning how to configure a computing environment to execute a specific data-oriented workload. In this paper, we propose a multi-layered methodology to analyze the energy consumption of big data workloads executed using Apache Spark in virtualized cloud environments. The approach is structured into three layers: resource provisioning, system-level resource utilization, and application-level resource utilization. Using direct energy measurements from a Power Distribution Unit (PDU) and detailed system monitoring, the study investigates how infrastructure choices and workload characteristics influence energy consumption. Results show that optimal virtual machine configurations depend on workload type and input size; while provisioning decisions affect energy consumption, system-level metrics such as CPU utilization and disk I/O offer a deeper understanding of the final performance versus energy consumption results. By applying our methodology, our results reveal the impact of task distribution and resource under-utilization on overall energy efficiency. The findings demonstrate that energy optimization in big data environments requires a comprehensive understanding of factors across infrastructure, system, and application layers. The proposed methodology serves as a practical guide for energy-aware design and decision-making in cloud-based data processing systems.
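Turning PDU power samples into an energy figure is a numerical integration of power over time. A minimal sketch of that step under the assumption of timestamped wattage samples; the sample format and function name are illustrative, not part of the paper's tooling.

```python
def energy_joules(samples):
    """Integrate (timestamp_s, power_W) PDU samples into joules using the
    trapezoidal rule; divide the result by 3.6e6 to express it in kWh."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += 0.5 * (p0 + p1) * (t1 - t0)
    return total
```

For example, a workload drawing a steady 100 W for 10 s accounts for 1000 J, regardless of the sampling interval.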

{"title":"A Multi-Layered Analysis of Energy Consumption in Spark","authors":"Nestor D. O. Volpini,&nbsp;Vinícius Dias,&nbsp;Dorgival Guedes","doi":"10.1002/cpe.70565","DOIUrl":"10.1002/cpe.70565","url":null,"abstract":"<p>Although energy has become a major concern in data processing systems, it is usually hard to get a deep understanding of how performance and energy consumption relate to each other when planning how to configure a computing environment to execute a specific data-oriented workload. In this paper, we propose a multi-layered methodology to analyze the energy consumption of big data workloads executed using Apache Spark in virtualized cloud environments. The approach is structured into three layers: Resource provisioning, system-level resource utilization, and application-level resource utilization. Using direct energy measurements using a Power Distribution Unit (PDU) and detailed system monitoring, the study investigates how infrastructure choices and workload characteristics influence energy consumption. Results show that optimal virtual machine configurations depend on workload type and input size; while provisioning decisions affect energy consumption, system-level metrics such as CPU utilization and disk I/O offer a deeper understanding of the final performance versus energy consumption results. By applying our methodology, our results reveal the impact of task distribution and resource under-utilization on overall energy efficiency. The findings demonstrate that energy optimization in big data environments requires a comprehensive understanding of factors across infrastructure, system, and application layers. 
The proposed methodology serves as a practical guide for energy-aware design and decision-making in cloud-based data processing systems.</p>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"38 3","pages":""},"PeriodicalIF":1.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cpe.70565","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146154624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
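The direct PDU measurements described in this work imply integrating sampled power over a workload's runtime to obtain energy. A minimal sketch of that step is below; the sample format `(timestamp_s, power_w)` and the function name are illustrative assumptions, not an interface from the paper:

```python
# Sketch: estimate energy (joules) for a Spark job from timestamped PDU
# power samples, using trapezoidal integration. The (seconds, watts)
# sample format and helper name are illustrative assumptions.

def energy_joules(samples):
    """samples: list of (timestamp_s, power_w) pairs, sorted by time."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += 0.5 * (p0 + p1) * (t1 - t0)  # trapezoid area for interval
    return total

# Example: a constant 200 W draw over 10 s corresponds to 2000 J.
samples = [(0.0, 200.0), (5.0, 200.0), (10.0, 200.0)]
print(energy_joules(samples))  # 2000.0
```

Dividing such a total by the number of completed tasks gives a per-task energy figure, which is one way the paper's layer-crossing comparisons (provisioning vs. system-level utilization) could be normalized.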
Citations: 0
Leveraging LLMs for Smart Cities Qualitative Data Analysis
IF 1.5 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2026-02-03 DOI: 10.1002/cpe.70547
Elisa Covato, Kamran Soomro, Zaheer Khan, Muhammad Bilal, Roisin Cormack, Samuel Green

Public authorities frequently conduct surveys and analyze data from citizens, a process that is often labor-intensive when performed manually. This paper explores how generative artificial intelligence (GenAI) can assist public authorities in automating data analysis. In this respect, we investigate the potential of large language models (LLMs) to perform sentiment analysis and summarization of unstructured data as smart services. Using data from the East Bristol livable neighborhood (EBLN) as a case study, we assess the accuracy and precision of these models and validate the results against ground-truth data and expert evaluations. Our findings indicate that sentiment classification achieved over 90% accuracy, and that incorporating retrieval-augmented generation (RAG) further improves the results. Comparative analysis based on domain-expert evaluation revealed that LLMs outperformed traditional summarization techniques. These results suggest that LLMs offer a promising and impactful approach to improving the efficiency of qualitative analysis, though further research is required to enhance their accuracy and usefulness.
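The reported >90% sentiment accuracy implies comparing model-assigned labels against annotated ground truth. A minimal sketch of that evaluation step follows; the label set and responses are illustrative stand-ins, not the EBLN data:

```python
# Sketch: accuracy of predicted sentiment labels against ground truth.
# The labels and example lists are illustrative; the paper's EBLN
# survey data is not reproduced here.

def accuracy(predicted, truth):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(predicted) == len(truth)
    hits = sum(p == t for p, t in zip(predicted, truth))
    return hits / len(truth)

truth     = ["positive", "negative", "neutral", "positive", "negative"]
predicted = ["positive", "negative", "neutral", "negative", "negative"]
print(f"accuracy = {accuracy(predicted, truth):.0%}")  # accuracy = 80%
```

In practice the predicted labels would come from prompting an LLM (optionally with RAG context) over each free-text survey response, with the same comparison applied afterward.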

Citations: 0
Correction to “Predictive Modelling of Tick Distribution: A Machine Learning Approach to Ixodes ricinus Abundance”
IF 1.5 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2026-02-03 DOI: 10.1002/cpe.70600

K. Jamalpuram, M. S. Sharif, A. Nanmi, et al., “ Predictive Modelling of Tick Distribution: A Machine Learning Approach to Ixodes ricinus Abundance,” Concurrency and Computation: Practice and Experience 38, no. 1 (2026): e70496, https://doi.org/10.1002/cpe.70496.

The university name is missing for affiliation number 3 in the published article. Therefore, affiliation 3 for Ahmed Ibrahim Alzahrani and Nasser Alalwan should be updated to:

Department of Computer Science and Engineering, College of Applied Studies, King Saud University, Riyadh, Saudi Arabia.

We apologize for this error.

Citations: 0
Scalable SoC Architecture for Parallel-Pixel Classification of Hyperspectral Images Using Weighted-Summation Kernel SVM
IF 1.5 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2026-02-03 DOI: 10.1002/cpe.70558
B. B. Shabarinath, Muralidhar Pullakandam

Hyperspectral image (HSI) classification demands high-performance processing and accuracy due to data dimensionality and computational complexity. The support vector machine (SVM) algorithm, renowned for its ability to manage high-dimensional data effectively through kernel techniques, has shown promising results in HSI analysis and applications. Composite kernels enhance SVM efficacy by integrating multiple kernels to capture diverse data characteristics. Existing SVM algorithms based on composite kernels are susceptible to high latency, computationally intensive operations, and poor scalability, which makes them challenging to deploy efficiently on resource-limited high-performance platforms such as field-programmable gate arrays (FPGAs). The paper proposes a novel composite weighted-summation kernel (WSK) that combines polynomial and hardware-friendly kernels (HFKs). The WSK has been integrated with SVM, resulting in a classifier named WSK-SVM. Furthermore, a scalable FPGA-based system-on-chip (SoC) architecture is developed for HSI classification using WSK-SVM. The architecture implements a fast kernel approximation method to compute an HFK that eliminates multiplications. The architecture leverages spatial and temporal parallelism, efficiently aggregating processing into clusters of parallel binary WSK-SVM classifiers using the one-vs-one (OvO) methodology. The approach is scalable and achieves a level of spatial parallelism that facilitates the concurrent classification of multiple pixels. Two different SoC boards were used to implement the design: one based on the Xilinx Zynq-7000 with four parallel classifiers, and the other based on the Zynq UltraScale+ MPSoC with eleven parallel classifiers. High classification accuracy (>99%) was achieved on benchmark datasets (Indian Pines, Pavia Centre, and Pavia University) using 8-bit fixed-point processing, with negligible loss relative to floating-point accuracy. Resource utilization on the PYNQ-Z2 reached 78.37% for LUTs and 77.14% for BRAM, whereas on the ZCU-104 it was 49.76% for LUTs and 95.19% for BRAM. Real-time latency was as low as 1.26 μs per pixel (ZCU-104, Pavia Centre dataset), with a throughput of up to 6.35 MBPS. The proposed architecture provides scalable and high-performance solutions for onboard HSI classification under real-time constraints.
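The one-vs-one (OvO) aggregation that the architecture parallelizes in hardware can be sketched in software. The ±1 decision outputs and class-pair dictionary below are illustrative stand-ins for the binary WSK-SVM classifier outputs, not the hardware interface:

```python
from itertools import combinations
from collections import Counter

# Sketch: one-vs-one (OvO) majority voting over binary classifier outputs.
# For each class pair (i, j) with i < j, a positive decision votes for
# class i and a negative decision votes for class j. The decision values
# here are illustrative stand-ins for WSK-SVM outputs.

def ovo_predict(decisions, n_classes):
    """decisions: dict mapping (i, j) with i < j to a signed decision value."""
    votes = Counter()
    for i, j in combinations(range(n_classes), 2):
        votes[i if decisions[(i, j)] > 0 else j] += 1
    # Majority vote; ties broken in favor of the lowest class index.
    return max(votes, key=lambda c: (votes[c], -c))

decisions = {(0, 1): +1, (0, 2): -1, (1, 2): -1}  # class 2 collects 2 votes
print(ovo_predict(decisions, 3))  # 2
```

For C classes this requires C(C-1)/2 binary classifiers, which is what the clusters of parallel binary WSK-SVM units on the two SoC boards evaluate concurrently per pixel.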

Citations: 0
Journal: Concurrency and Computation-Practice & Experience