
Journal of King Saud University-Computer and Information Sciences: Latest Publications

High-throughput systolic array-based accelerator for hybrid transformer-CNN networks
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-01 | DOI: 10.1016/j.jksuci.2024.102194
Qingzeng Song, Yao Dai, Hao Lu, Guanghao Jin
In this era of Transformers enjoying remarkable success, Convolutional Neural Networks (CNNs) remain highly relevant and useful. Indeed, hybrid Transformer-CNN network architectures, which combine the benefits of both approaches, have achieved impressive results. Vision Transformer (ViT) is a significant neural network architecture that is built primarily on the transformer framework and features a convolutional layer as its first layer. However, owing to the distinct computation patterns inherent in attention and convolution, existing hardware accelerators for these two models are typically designed separately and lack a unified approach toward accelerating both models efficiently. In this paper, we present a dedicated accelerator on a field-programmable gate array (FPGA) platform. The accelerator, which integrates a configurable three-dimensional systolic array, is specifically designed to accelerate the inference of hybrid Transformer-CNN networks. Convolution and Transformer computations are mapped to the same systolic array by unifying both operations as matrix multiplication. Softmax and LayerNorm, which are frequently used in hybrid Transformer-CNN networks, were also implemented on the FPGA. The accelerator achieved a peak throughput of 722 GOP/s at an average energy efficiency of 53 GOPS/W, with computation latencies of 51.3 ms, 18.1 ms, and 6.8 ms for ViT-Base, ViT-Small, and ViT-Tiny, respectively. It provides a 12× improvement in energy efficiency over a CPU, a 2.3× improvement over a GPU, and a 1.5× to 2× improvement in speed and energy efficiency over existing accelerators.
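The unification described above (convolution and attention both lowered to matrix multiplication so that one systolic datapath can serve both) can be illustrated with a small NumPy sketch. The `systolic_matmul`, `im2col`, and other names below are hypothetical stand-ins; the sketch only shows the lowering, not the accelerator's actual dataflow or quantization.

```python
import numpy as np

def systolic_matmul(a, b):
    """Stand-in for the shared matrix-multiply datapath (a plain GEMM here)."""
    return a @ b

def im2col(x, k):
    """Unfold k x k patches of a (C, H, W) feature map into a (H'*W', C*k*k) matrix."""
    c, h, w = x.shape
    oh, ow = h - k + 1, w - k + 1
    cols = np.empty((oh * ow, c * k * k))
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[:, i:i + k, j:j + k].ravel()
    return cols

def conv_as_matmul(x, weights):
    """Convolution lowered to one GEMM: (patch matrix) x (filter matrix)."""
    n_filters, c, k, _ = weights.shape
    cols = im2col(x, k)                        # (H'*W', C*k*k)
    w_mat = weights.reshape(n_filters, -1).T   # (C*k*k, n_filters)
    out = systolic_matmul(cols, w_mat)         # (H'*W', n_filters)
    oh = x.shape[1] - k + 1
    return out.T.reshape(n_filters, oh, -1)

def attention_as_matmul(q, k, v):
    """Single-head attention expressed as two GEMMs plus a softmax."""
    scores = systolic_matmul(q, k.T) / np.sqrt(q.shape[-1])
    scores = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs = scores / scores.sum(axis=-1, keepdims=True)
    return systolic_matmul(probs, v)

x = np.random.randn(3, 8, 8)                 # toy feature map
w = np.random.randn(4, 3, 3, 3)              # 4 filters of size 3x3
q = k_ = v = np.random.randn(16, 32)         # toy token embeddings
print(conv_as_matmul(x, w).shape)            # (4, 6, 6)
print(attention_as_matmul(q, k_, v).shape)   # (16, 32)
```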
Citations: 0
A scalable attention network for lightweight image super-resolution
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-01 | DOI: 10.1016/j.jksuci.2024.102185
Jinsheng Fang, Xinyu Chen, Jianglong Zhao, Kun Zeng
Modeling long-range dependencies among features has become a consensus approach to improving single image super-resolution (SISR), which has stimulated interest in enlarging the kernel sizes of convolutional neural networks (CNNs). Although larger kernels clearly improve network performance, they also sharply increase the number of parameters and the computational complexity. Hence, the kernel sizes need to be chosen carefully to keep the network efficient. In this work, we study how the placement of larger kernels affects network performance and propose a scalable attention network (SCAN). In SCAN, we propose a depth-related attention block (DRAB) that consists of several multi-scale information enhancement blocks (MIEBs) and resizable-kernel attention blocks (RKABs). The RKAB dynamically adjusts its kernel size according to the position of its DRAB in the network. This resizable mechanism lets the network extract more informative features in shallower layers with larger kernels and focus on useful information in deeper layers with smaller ones, which effectively improves the SR results. Extensive experiments demonstrate that the proposed SCAN outperforms other state-of-the-art lightweight SR methods. Our codes are available at https://github.com/ginsengf/SCAN.
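As a rough sketch of the resizable-kernel idea (larger kernels in shallow blocks, smaller kernels deeper in the network), the PyTorch snippet below uses a depth-indexed kernel schedule inside a simple convolutional gate. The schedule, module, and parameter choices are assumptions for illustration, not the actual RKAB/DRAB definitions from the paper.

```python
import torch
import torch.nn as nn

def kernel_for_depth(block_idx, n_blocks, schedule=(9, 7, 5, 3)):
    """Map a block's depth to a kernel size: shallow blocks get larger kernels."""
    step = max(1, n_blocks // len(schedule))
    return schedule[min(block_idx // step, len(schedule) - 1)]

class ResizableKernelAttention(nn.Module):
    """Toy attention gate whose receptive field depends on the block's depth."""
    def __init__(self, channels, block_idx, n_blocks):
        super().__init__()
        k = kernel_for_depth(block_idx, n_blocks)
        # depthwise conv keeps the gate cheap; padding preserves spatial size
        self.gate = nn.Conv2d(channels, channels, kernel_size=k,
                              padding=k // 2, groups=channels)

    def forward(self, x):
        return x * torch.sigmoid(self.gate(x))

blocks = nn.ModuleList([ResizableKernelAttention(32, i, 8) for i in range(8)])
x = torch.randn(1, 32, 24, 24)
for blk in blocks:
    x = blk(x)
print(x.shape)  # torch.Size([1, 32, 24, 24])
```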
Citations: 0
Enhancing requirements-to-code traceability with GA-XWCoDe: Integrating XGBoost, Node2Vec, and genetic algorithms for improving model performance and stability
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-01 | DOI: 10.1016/j.jksuci.2024.102197
Zhiyuan Zou, Bangchao Wang, Xinrong Hu, Yang Deng, Hongyan Wan, Huan Jin
This study addresses the challenge of requirements-to-code traceability by proposing a novel model, Genetic Algorithm-XGBoost With Code Dependency (GA-XWCoDe), which integrates eXtreme Gradient Boosting (XGBoost) with a Node2Vec model-weighted code dependency strategy and genetic algorithms for parameter optimisation. XGBoost mitigates overfitting and enhances model stability, while Node2Vec improves prediction accuracy for low-confidence links. Genetic algorithms are employed to optimise model parameters efficiently, reducing the resource intensity of traditional methods. Experimental results show that GA-XWCoDe outperforms the state-of-the-art method TRAceability lInk cLassifier (TRAIL) by 17.44% and Deep Forest for Requirement traceability (DF4RT) by 33.36% in terms of average F1 performance across four datasets. It is significantly superior to all baseline methods at a confidence level of α < 0.01 and demonstrates exceptional performance and stability across various training data scales.
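To make the genetic-algorithm side of such a pipeline concrete, here is a minimal GA loop over an XGBoost-style hyperparameter space. The parameter names, ranges, and the placeholder fitness function are assumptions; in practice the fitness would be a cross-validated traceability score from the trained classifier.

```python
import random

SEARCH_SPACE = {                      # assumed ranges, not the paper's configuration
    "n_estimators": (50, 400),
    "max_depth": (3, 10),
    "learning_rate": (0.01, 0.3),
}

def random_individual():
    return {
        "n_estimators": random.randint(*SEARCH_SPACE["n_estimators"]),
        "max_depth": random.randint(*SEARCH_SPACE["max_depth"]),
        "learning_rate": random.uniform(*SEARCH_SPACE["learning_rate"]),
    }

def fitness(params):
    """Placeholder: in practice, train the classifier with `params` and return CV F1."""
    return 1.0 - abs(params["learning_rate"] - 0.1) - 0.01 * abs(params["max_depth"] - 6)

def crossover(a, b):
    return {key: random.choice([a[key], b[key]]) for key in a}

def mutate(ind, rate=0.3):
    child = dict(ind)
    if random.random() < rate:
        key = random.choice(list(SEARCH_SPACE))
        child[key] = random_individual()[key]     # resample one gene
    return child

def genetic_search(pop_size=20, generations=15, elite=4):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=fitness)

print(genetic_search())
```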
Citations: 0
Fast and robust JND-guided video watermarking scheme in spatial domain
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-30 | DOI: 10.1016/j.jksuci.2024.102199
Antonio Cedillo-Hernandez, Lydia Velazquez-Garcia, Manuel Cedillo-Hernandez, David Conchouso-Gonzalez
Generally speaking, watermarking schemes that operate in the spatial domain tend to be fast but offer limited robustness and imperceptibility, while those performed in other transform domains are robust but computationally expensive. One of the main challenges of watermarking digital video is the large amount of computational power required to process such a huge volume of information. In this paper we propose a watermarking algorithm for digital video that addresses this problem. To increase speed, the watermark is embedded using a technique that modifies the DCT coefficients directly in the spatial domain, and the process treats the video scene, rather than the video frame, as the basic unit. In terms of robustness, the watermark is modulated by a Just Noticeable Distortion (JND) scheme computed directly in the spatial domain and guided by visual attention, which raises the watermark strength to the maximum level that remains imperceptible to human eyes. Experimental results confirm that the proposed method achieves remarkable performance in terms of processing time, robustness and imperceptibility compared to previous studies.
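The central trick, changing DCT coefficients without leaving the spatial domain, is easiest to see for the DC coefficient: with an orthonormal 8x8 DCT, adding a constant d to every pixel of a block raises its DC coefficient by 8d, so a desired coefficient shift can be applied pixel-wise. The sketch below embeds one bit per block this way, with a fixed per-block strength standing in for the JND model; it is an illustration of the principle, not the paper's scheme.

```python
import numpy as np

BLOCK = 8

def embed_bit_spatial(block, bit, strength):
    """
    Shift the block's DC DCT coefficient by +/- `strength` without computing a DCT:
    for an orthonormal 8x8 DCT, adding d to every pixel raises DC by 8*d.
    """
    delta = strength if bit else -strength      # desired DC change
    return block + delta / BLOCK                # equivalent per-pixel offset

def embed_watermark(frame, bits, jnd_strength=4.0):
    """Embed one bit per 8x8 block, row by row; jnd_strength is a stand-in for JND."""
    marked = frame.astype(np.float64)
    h, w = frame.shape
    idx = 0
    for by in range(0, h - BLOCK + 1, BLOCK):
        for bx in range(0, w - BLOCK + 1, BLOCK):
            if idx >= len(bits):
                return np.clip(marked, 0, 255)
            block = marked[by:by + BLOCK, bx:bx + BLOCK]
            marked[by:by + BLOCK, bx:bx + BLOCK] = embed_bit_spatial(
                block, bits[idx], jnd_strength)
            idx += 1
    return np.clip(marked, 0, 255)

frame = np.random.randint(0, 256, size=(64, 64))
bits = [1, 0, 1, 1, 0, 1, 0, 0]
marked = embed_watermark(frame, bits)
print(np.abs(marked - frame).max())   # small per-pixel offset, imperceptible
```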
Citations: 0
Software requirement engineering over the federated environment in distributed software development process
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-28 | DOI: 10.1016/j.jksuci.2024.102201
Abdulaziz Alhumam, Shakeel Ahmed
The distributed software development (DSD) process has become increasingly prevalent as the software development process has evolved rapidly. This transformation necessitates a robust framework for software requirement engineering (SRE) that works in federated environments, in which multiple independent software entities collaborate on software development, often across organizations and geographical borders. The decentralized structure of the federated architecture makes requirement elicitation, analysis, specification, validation, and administration more effective. The proposed model emphasizes flexibility and agility, leveraging the collaboration of multiple localized models within a diversified development framework. This collaborative approach is designed to integrate the strengths of each local process, ultimately resulting in a robust software prototype. The performance of the proposed DSD model is evaluated using two case studies, an e-commerce website and a learning management system, by considering divergent functional and non-functional requirements for each case study and analyzing the results with standardized metrics such as mean square error (MSE), mean absolute error (MAE), and the Pearson correlation coefficient (PCC). The proposed model exhibited reasonable performance, with MSE values of 0.12 and 0.153 and MAE values of 0.222 and 0.232 for functional and non-functional requirements, respectively.
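For reference, the three reported metrics can be computed directly from predicted and observed values; a minimal NumPy version with made-up numbers is shown below.

```python
import numpy as np

def evaluate(predicted, observed):
    """Return the MSE, MAE, and Pearson correlation of predictions vs. observations."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    err = predicted - observed
    return {
        "MSE": float(np.mean(err ** 2)),                       # mean square error
        "MAE": float(np.mean(np.abs(err))),                    # mean absolute error
        "PCC": float(np.corrcoef(predicted, observed)[0, 1]),  # Pearson correlation
    }

# toy example with hypothetical requirement-quality scores
print(evaluate([0.8, 0.6, 0.9, 0.4], [0.7, 0.5, 1.0, 0.6]))
```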
Citations: 0
PFEL-Net: A lightweight network to enhance feature for multi-scale pedestrian detection
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-26 | DOI: 10.1016/j.jksuci.2024.102198
Jingwen Tang, Huicheng Lai, Guxue Gao, Tongguan Wang
In the context of intelligent community research, pedestrian detection is an important and challenging object detection task. The diversity of pedestrian target scales and interference from the surrounding background can cause a detector to produce incorrect and missed detections, while a large algorithm model makes the detector hard to deploy. In response to these issues, this work presents a pedestrian feature enhancement lightweight network (PFEL-Net), which enables edge computing and accurate detection of multi-scale pedestrian targets in complex scenes. Firstly, a parallel dilated residual module is designed to expand the receptive field and obtain richer pedestrian features; then, a selective bidirectional diffusion pyramid network is devised to finely fuse features, and a detail feature layer captures multi-scale information; after that, a lightweight shared detection head is constructed to slim down the model head; finally, a channel pruning algorithm is employed to further reduce the computational complexity and size of the improved model without compromising accuracy. On the CityPersons dataset, compared to YOLOv8, PFEL-Net increases mAP50 and mAP50:95 by 6.3% and 4.9%, respectively, reduces the number of model parameters by 89% and compresses the model size by 85%, to a mere 0.9 MB. Similarly, excellent performance is achieved on the TinyPerson dataset. The source code is available at https://github.com/1tangbao/PFEL.
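The channel-pruning step mentioned last is commonly driven by a per-channel importance score. The NumPy sketch below ranks a convolution's output channels by the L1 norm of their filters and keeps a fixed fraction; this is a generic illustration of channel pruning, not PFEL-Net's exact criterion.

```python
import numpy as np

def select_channels(conv_weight, keep_ratio=0.5):
    """
    conv_weight: (out_channels, in_channels, kH, kW) kernel tensor.
    Returns indices of channels to keep, ranked by the L1 norm of their filters.
    """
    scores = np.abs(conv_weight).sum(axis=(1, 2, 3))     # L1 importance per channel
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    return np.sort(np.argsort(scores)[::-1][:n_keep])    # keep the strongest channels

def prune_conv(conv_weight, keep_ratio=0.5):
    keep = select_channels(conv_weight, keep_ratio)
    return conv_weight[keep], keep                       # pruned kernel + index map

w = np.random.randn(64, 32, 3, 3)
pruned, kept = prune_conv(w, keep_ratio=0.25)
print(pruned.shape, kept[:5])   # (16, 32, 3, 3) and the retained channel indices
```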
Citations: 0
A truthful randomized mechanism for task allocation with multi-attributes in mobile edge computing
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-26 | DOI: 10.1016/j.jksuci.2024.102196
Xi Liu, Jun Liu
Mobile Edge Computing (MEC) aims at decreasing the response time and energy consumption of running mobile applications by offloading the tasks of mobile devices (MDs) to the MEC servers located at the edge of the network. The demands are multi-attribute, where the distances between MDs and access points lead to differences in required resources and transmission energy consumption. Unfortunately, the existing works have not considered both task allocation and energy consumption problems. Motivated by this, this paper considers the problem of task allocation with multi-attributes, where the problem consists of the winner determination and offloading decision problems. First, the problem is formulated as the auction-based model to provide flexible service. Then, a randomized mechanism is designed and is truthful in expectation. This drives the system into an equilibrium where no MD has incentives to increase the utility by declaring an untrue value. In addition, an approximation algorithm is proposed to minimize remote energy consumption and is a polynomial-time approximation scheme. Therefore, it achieves a tradeoff between optimality loss and time complexity. Simulation results reveal that the proposed mechanism gets the near-optimal allocation. Furthermore, compared with the baseline methods, the proposed mechanism can effectively increase social welfare and bring higher revenue to edge server providers.
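For intuition only, the sketch below shows a deterministic, single-resource winner-determination step: devices declare a value and a demand, and the server admits them in order of value density until capacity is exhausted. It deliberately omits the multi-attribute demands, the randomization, and the payment rule that make the paper's mechanism truthful in expectation; all names and fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    device: str
    value: float    # declared valuation of being served
    demand: float   # resource units requested (single-attribute simplification)

def greedy_winners(bids, capacity):
    """Admit bids by value density (value per unit of demand) until capacity runs out."""
    winners, used = [], 0.0
    for bid in sorted(bids, key=lambda b: b.value / b.demand, reverse=True):
        if used + bid.demand <= capacity:
            winners.append(bid.device)
            used += bid.demand
    return winners, used

bids = [Bid("md1", 10.0, 4.0), Bid("md2", 6.0, 1.0), Bid("md3", 7.0, 5.0)]
print(greedy_winners(bids, capacity=6.0))   # (['md2', 'md1'], 5.0)
```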
Citations: 0
Flow prediction of mountain cities arterial road network for real-time regulation
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-25 | DOI: 10.1016/j.jksuci.2024.102190
Xiaoyu Cai, Zimu Li, Jiajia Dai, Liang Lv, Bo Peng
This study aims to enhance the understanding of vehicle path selection behavior within arterial road networks by investigating the influencing factors and analyzing spatial and temporal traffic flow distributions. Using radio frequency identification (RFID) travel data, key factors such as travel duration, route familiarity, route length, expressway ratio, arterial road ratio, and ramp ratio were identified. We then proposed an origin-destination path acquisition method and developed a route-selection prediction model based on a multinomial logit model with sample weights. Additionally, the study linked the traffic control scheme with travel time using the Bureau of Public Roads function (a model that illustrates the relationship between network-wide travel time and traffic demand) and developed an arterial road network traffic forecasting model. Verification showed that the prediction accuracy of the improved multinomial logit model increased from 92.55% to 97.87%. Furthermore, reducing the green time ratio for multilane merging from 0.75 to 0.5 significantly decreased the likelihood of vehicles choosing this route and reduced the number of vehicles passing through the ramp. The flow prediction model achieved a 97.9% accuracy, accurately reflecting actual volume changes and ensuring smooth operation of the main airport road. This provides a strong foundation for developing effective traffic control plans.
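Both building blocks named above have standard closed forms: the Bureau of Public Roads (BPR) function maps a link's volume-to-capacity ratio to travel time, and the multinomial logit model turns route utilities into choice probabilities. The sketch below uses the common BPR defaults (alpha = 0.15, beta = 4) and made-up utility coefficients; the calibrated weights in the paper come from its RFID data.

```python
import numpy as np

def bpr_travel_time(t_free, volume, capacity, alpha=0.15, beta=4.0):
    """Bureau of Public Roads link travel time with the standard parameter defaults."""
    return t_free * (1.0 + alpha * (volume / capacity) ** beta)

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities over candidate routes."""
    u = np.asarray(utilities, dtype=float)
    e = np.exp(u - u.max())            # stabilized softmax
    return e / e.sum()

# toy example: three candidate routes described by (travel time, expressway ratio)
routes = [(18.0, 0.6), (22.0, 0.8), (25.0, 0.2)]
coef_time, coef_expressway = -0.15, 1.2      # hypothetical utility coefficients
utilities = [coef_time * t + coef_expressway * r for t, r in routes]
print(mnl_probabilities(utilities))          # route-choice shares
print(bpr_travel_time(t_free=10.0, volume=900, capacity=1200))  # congested link time
```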
Citations: 0
The evolution of the flip-it game in cybersecurity: Insights from the past to the future
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-25 | DOI: 10.1016/j.jksuci.2024.102195
Mousa Tayseer Jafar, Lu-Xing Yang, Gang Li, Xiaofan Yang
Cybercrime statistics highlight the severe and growing impact of digital threats on individuals and organizations, with financial losses escalating rapidly. As cybersecurity becomes a central challenge, several modern cyber defense strategies prove insufficient for effectively countering the threats posed by sophisticated attackers. Despite advancements in cybersecurity, many existing frameworks often lack the capacity to address the evolving tactics of adept adversaries. With cyber threats growing in sophistication and diversity, there is a growing acknowledgment of the shortcomings within current defense strategies, underscoring the need for more robust and innovative solutions. To develop resilient cyber defense strategies, it remains essential to simulate the dynamic interaction between sophisticated attackers and system defenders. Such simulations enable organizations to anticipate and effectively counter emerging threats. The Flip-It game is recognized as an intelligent simulation game for capturing the dynamic interplay between sophisticated attackers and system defenders. It provides the capability to emulate intricate cyber scenarios, allowing organizations to assess their defensive capabilities against evolving threats, analyze vulnerabilities, and improve their response strategies by simulating real-world cyber scenarios. This paper provides a comprehensive analysis of the Flip-It game in the context of cybersecurity, tracing its development from inception to future prospects. It highlights significant contributions and identifies potential future research avenues for scholars in the field. This study aims to deliver a thorough understanding of the Flip-It game’s progression, serving as a valuable resource for researchers and practitioners involved in cybersecurity strategy and defense mechanisms.
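To ground the discussion, basic Flip-It can be simulated in a few lines: the resource belongs to whoever moved last, moves are made blindly and at a cost, and each player's payoff is its share of control time minus its move-cost rate. The sketch below pits a periodic defender against an attacker with exponentially distributed inter-move times; all parameters are illustrative and not taken from any surveyed result.

```python
import random

def frange(start, stop, step):
    """Yield start, start + step, ... while below stop."""
    t = start
    while t < stop:
        yield t
        t += step

def simulate_flipit(horizon, defender_period, attacker_rate,
                    defender_cost, attacker_cost, seed=0):
    """One run of basic Flip-It: the resource belongs to whoever flipped last."""
    rng = random.Random(seed)
    # defender flips periodically with a random phase; attacker flips at Poisson times
    moves = [(t, "defender")
             for t in frange(rng.uniform(0, defender_period), horizon, defender_period)]
    t = rng.expovariate(attacker_rate)
    while t < horizon:
        moves.append((t, "attacker"))
        t += rng.expovariate(attacker_rate)
    moves.sort()

    control = {"defender": 0.0, "attacker": 0.0}
    owner, last = "defender", 0.0                  # defender holds the resource at t = 0
    for when, player in moves:
        control[owner] += when - last
        owner, last = player, when
    control[owner] += horizon - last

    flips = {"defender": sum(p == "defender" for _, p in moves)}
    flips["attacker"] = len(moves) - flips["defender"]
    # payoff rate = share of time in control minus move-cost rate
    return {p: control[p] / horizon - cost * flips[p] / horizon
            for p, cost in (("defender", defender_cost), ("attacker", attacker_cost))}

print(simulate_flipit(horizon=1000.0, defender_period=5.0, attacker_rate=0.1,
                      defender_cost=0.5, attacker_cost=1.0))
```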
Citations: 0
Framework to improve software effort estimation accuracy using novel ensemble rule
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-20 | DOI: 10.1016/j.jksuci.2024.102189
Syed Sarmad Ali, Jian Ren, Ji Wu
This investigation focuses on refining software effort estimation (SEE) to enhance project outcomes amidst the rapid evolution of the software industry. Accurate estimation is a cornerstone of project success, crucial for avoiding budget overruns and minimizing the risk of project failures. The framework proposed in this article addresses three significant issues that are critical for accurate estimation: dealing with missing or inadequate data, selecting key features, and improving the software effort model. Our proposed framework incorporates three methods: the Novel Incomplete Value Imputation Model (NIVIM), a hybrid model using Correlation-based Feature Selection with a meta-heuristic algorithm (CFS-Meta), and the Heterogeneous Ensemble Model (HEM). The combined framework synergistically enhances the robustness and accuracy of SEE by effectively handling missing data, optimizing feature selection, and integrating diverse predictive models for superior performance across varying project scenarios. The framework significantly reduces imputation and feature selection overhead, while the ensemble approach optimizes model performance through dynamic weighting and meta-learning. This results in lower mean absolute error (MAE) and reduced computational complexity, making it more effective for diverse software datasets. NIVIM is engineered to address incomplete datasets prevalent in SEE. By integrating a synthetic data methodology through a Variational Auto-Encoder (VAE), the model incorporates both contextual relevance and intrinsic project features, significantly enhancing estimation precision. Comparative analyses reveal that NIVIM surpasses existing models such as VAE, GAIN, K-NN, and MICE, achieving statistically significant improvements across six benchmark datasets, with average RMSE improvements ranging from 11.05% to 17.72% and MAE improvements from 9.62% to 21.96%. Our proposed method, CFS-Meta, balances global optimization with local search techniques, substantially enhancing predictive capabilities. The proposed CFS-Meta model was compared to single and hybrid feature selection models to assess its efficiency, demonstrating up to a 25.61% reduction in MSE. Additionally, the proposed CFS-Meta achieves a 10% (MAE) improvement against the hybrid PSO-SA model, an 11.38% (MAE) improvement compared to the hybrid ABC-SA model, and 12.42% and 12.703% (MAE) improvements compared to the hybrid Tabu-GA and hybrid ACO-COA models, respectively. Our third method proposes an ensemble effort estimation (EEE) model that amalgamates diverse standalone models through a Dynamic Weight Adjustment-stacked combination (DWSC) rule. Tested against international benchmarks and industry datasets, the HEM method improved the standalone model by an average of 21.8% (Pred()) and the homogeneous ensemble model by 15% (Pred()). This comprehensive approach underscores the contribution of our models to advancing software project management (SPM) through advanced predictive modeling, setting a new benchmark for software engineering effort estimation.
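The dynamic-weighting idea behind the ensemble combiner can be illustrated simply: weight each base estimator by the inverse of its validation error and combine estimates with the normalized weights. The NumPy sketch below is a generic inverse-MAE weighting, not the DWSC rule itself; the models and numbers are hypothetical.

```python
import numpy as np

def dynamic_weights(val_predictions, val_actual, eps=1e-9):
    """Weight each base model by the inverse of its validation MAE."""
    val_actual = np.asarray(val_actual, dtype=float)
    maes = np.array([np.mean(np.abs(np.asarray(p, dtype=float) - val_actual))
                     for p in val_predictions])
    inv = 1.0 / (maes + eps)
    return inv / inv.sum()

def combine(predictions, weights):
    """Weighted average of the base models' effort estimates."""
    return np.asarray(predictions, dtype=float).T @ np.asarray(weights, dtype=float)

# toy effort values (person-months) from three hypothetical base models
actual_val = [12.0, 30.0, 7.5]
val_preds = [[11.0, 28.0, 8.0],    # model A
             [15.0, 36.0, 6.0],    # model B
             [12.5, 29.5, 7.0]]    # model C
w = dynamic_weights(val_preds, actual_val)
new_preds = [[20.0], [24.0], [21.0]]     # each model's estimate for a new project
print(w, combine(new_preds, w))          # weights favor the lower-error models
```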
Citations: 0