Pub Date: 2024-10-01 | DOI: 10.1016/j.jksuci.2024.102194
Qingzeng Song, Yao Dai, Hao Lu, Guanghao Jin
In this era of Transformers enjoying remarkable success, Convolutional Neural Networks (CNNs) remain highly relevant and useful. Indeed, hybrid Transformer-CNN architectures, which combine the benefits of both approaches, have achieved impressive results. The Vision Transformer (ViT), for example, is built primarily on the transformer framework yet features a convolutional layer as its first layer. However, owing to the distinct computation patterns of attention and convolution, existing hardware accelerators for these two models are typically designed separately and lack a unified approach to accelerating both efficiently. In this paper, we present a dedicated accelerator on a field-programmable gate array (FPGA) platform. The accelerator, which integrates a configurable three-dimensional systolic array, is specifically designed to accelerate inference for hybrid Transformer-CNN networks. Convolution and Transformer computations are mapped onto the systolic array by unifying both operations as matrix multiplication. Softmax and LayerNorm, which are frequently used in hybrid Transformer-CNN networks, were also implemented on the FPGA. The accelerator achieved a peak throughput of 722 GOP/s at an average energy efficiency of 53 GOPS/W, with computation latencies of 51.3 ms, 18.1 ms, and 6.8 ms for ViT-Base, ViT-Small, and ViT-Tiny, respectively. It provided a 12× improvement in energy efficiency compared to the CPU, a 2.3× improvement compared to the GPU, and a 1.5× to 2× improvement over existing accelerators in terms of speed and energy efficiency.
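The unified mapping mentioned above rests on the fact that both convolution and attention reduce to general matrix multiplication (GEMM), which is what a systolic array executes. The NumPy sketch below illustrates this reduction, using im2col for the convolution side; the function names, the stride-1/no-padding setting, and the shapes are illustrative assumptions, not the accelerator's actual dataflow.

```python
import numpy as np

def im2col(x, k):
    """Unfold a (C, H, W) feature map into a (H_out*W_out, C*k*k) matrix
    so that convolution becomes a single matrix multiplication."""
    C, H, W = x.shape
    H_out, W_out = H - k + 1, W - k + 1
    cols = np.empty((H_out * W_out, C * k * k))
    for i in range(H_out):
        for j in range(W_out):
            cols[i * W_out + j] = x[:, i:i + k, j:j + k].ravel()
    return cols

def conv_as_gemm(x, weight):
    """weight: (C_out, C_in, k, k) -> GEMM of (H_out*W_out, C_in*k*k) x (C_in*k*k, C_out)."""
    C_out, C_in, k, _ = weight.shape
    cols = im2col(x, k)                      # activation matrix
    w_mat = weight.reshape(C_out, -1).T      # weight matrix
    return cols @ w_mat                      # one GEMM, suitable for a systolic array

def attention_scores_as_gemm(tokens, Wq, Wk):
    """Q/K projections and the attention score matrix are already plain GEMMs."""
    Q, K = tokens @ Wq, tokens @ Wk
    return Q @ K.T / np.sqrt(K.shape[-1])

# Both operators reduce to matrix multiplications, so a single GEMM engine
# (here: the systolic array) can serve convolution and attention layers.
x = np.random.rand(3, 8, 8)                  # (C_in, H, W)
w = np.random.rand(16, 3, 3, 3)              # (C_out, C_in, k, k)
print(conv_as_gemm(x, w).shape)              # (36, 16): one row per output position
```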
{"title":"High-throughput systolic array-based accelerator for hybrid transformer-CNN networks","authors":"Qingzeng Song , Yao Dai , Hao Lu , Guanghao Jin","doi":"10.1016/j.jksuci.2024.102194","DOIUrl":"10.1016/j.jksuci.2024.102194","url":null,"abstract":"<div><div>In this era of Transformers enjoying remarkable success, Convolutional Neural Networks (CNNs) remain highly relevant and useful. Indeed, hybrid Transformer-CNN network architectures, which combine the benefits of both approaches, have achieved impressive results. Vision Transformer (ViT) is a significant neural network architecture that features a convolutional layer as its first layer, primarily built on the transformer framework. However, owing to the distinct computation patterns inherent in attention and convolution, existing hardware accelerators for these two models are typically designed separately and lack a unified approach toward accelerating both models efficiently. In this paper, we present a dedicated accelerator on a field-programmable gate array (FPGA) platform. The accelerator, which integrates a configurable three-dimensional systolic array, is specifically designed to accelerate the inferential capabilities of hybrid Transformer-CNN networks. The Convolution and Transformer computations can be mapped to a systolic array by unifying these operations for matrix multiplication. Softmax and LayerNorm which are frequently used in hybrid Transformer-CNN networks were also implemented on FPGA boards. The accelerator achieved high performance with a peak throughput of 722 GOP/s at an average energy efficiency of 53 GOPS/W. Its respective computation latencies were 51.3 ms, 18.1 ms, and 6.8 ms for ViT-Base, ViT-Small, and ViT-Tiny. The accelerator provided a <span><math><mrow><mn>12</mn><mo>×</mo></mrow></math></span> improvement in energy efficiency compared to the CPU, a <span><math><mrow><mn>2</mn><mo>.</mo><mn>3</mn><mo>×</mo></mrow></math></span> improvement compared to the GPU, and a <span><math><mrow><mn>1</mn><mo>.</mo><mn>5</mn><mo>×</mo></mrow></math></span> to <span><math><mrow><mn>2</mn><mo>×</mo></mrow></math></span> improvement compared to existing accelerators regarding speed and energy efficiency.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 8","pages":"Article 102194"},"PeriodicalIF":5.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142358142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling long-range dependencies among features has become a consensus approach to improving single image super-resolution (SISR), which has stimulated interest in enlarging the kernel sizes of convolutional neural networks (CNNs). Although larger kernels do improve network performance, they also sharply increase the number of parameters and the computational complexity. Hence, the kernel sizes must be set judiciously to keep the network efficient. In this work, we study how the positions of larger kernels influence network performance and propose a scalable attention network (SCAN). In SCAN, we propose a depth-related attention block (DRAB) that consists of several multi-scale information enhancement blocks (MIEBs) and resizable-kernel attention blocks (RKABs). The RKAB dynamically adjusts its kernel size according to the location of its DRAB in the network. This resizable mechanism allows the network to extract more informative features in shallower layers with larger kernels and to focus on useful information in deeper layers with smaller ones, which effectively improves the SR results. Extensive experiments demonstrate that the proposed SCAN outperforms other state-of-the-art lightweight SR methods. Our code is available at https://github.com/ginsengf/SCAN.
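To make the resizable-kernel idea concrete (larger kernels in shallow blocks, smaller ones in deeper blocks), here is a minimal PyTorch sketch in which a depthwise attention block picks its kernel size from its depth index; the kernel schedule and module structure are assumptions for illustration and do not reproduce the authors' RKAB design.

```python
import torch
import torch.nn as nn

class ResizableKernelAttention(nn.Module):
    """Depthwise large-kernel attention whose kernel size depends on block depth."""
    def __init__(self, channels, depth_index, num_blocks):
        super().__init__()
        # Assumed schedule: early blocks get a 9x9 kernel, the deepest block a 3x3 kernel.
        sizes = [9, 7, 5, 3]
        k = sizes[min(depth_index * len(sizes) // num_blocks, len(sizes) - 1)]
        self.dwconv = nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
        self.pwconv = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        attn = torch.sigmoid(self.pwconv(self.dwconv(x)))  # spatial attention map
        return x * attn                                     # reweight input features

# Shallow block (index 0) uses the 9x9 kernel, the deepest block uses 3x3.
blocks = [ResizableKernelAttention(64, i, num_blocks=4) for i in range(4)]
x = torch.randn(1, 64, 48, 48)
for blk in blocks:
    x = blk(x)
```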
{"title":"A scalable attention network for lightweight image super-resolution","authors":"Jinsheng Fang , Xinyu Chen , Jianglong Zhao , Kun Zeng","doi":"10.1016/j.jksuci.2024.102185","DOIUrl":"10.1016/j.jksuci.2024.102185","url":null,"abstract":"<div><div>Modeling long-range dependencies among features has become a consensus to improve the results of single image super-resolution (SISR), which stimulates interest in enlarging the kernel sizes in convolutional neural networks (CNNs). Although larger kernels definitely improve the network performance, network parameters and computational complexities are raised sharply as well. Hence, an optimization of setting the kernel sizes is required to improve the efficiency of the network. In this work, we study the influence of the positions of larger kernels on the network performance, and propose a scalable attention network (SCAN). In SCAN, we propose a depth-related attention block (DRAB) that consists of several multi-scale information enhancement blocks (MIEBs) and resizable-kernel attention blocks (RKABs). The RKAB dynamically adjusts the kernel size concerning the locations of the DRABs in the network. The resizable mechanism allows the network to extract more informative features in shallower layers with larger kernels and focus on useful information in deeper layers with smaller ones, which effectively improves the SR results. Extensive experiments demonstrate that the proposed SCAN outperforms other state-of-the-art lightweight SR methods. Our codes are available at <span><span>https://github.com/ginsengf/SCAN</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 8","pages":"Article 102185"},"PeriodicalIF":5.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142358141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-01 | DOI: 10.1016/j.jksuci.2024.102197
Zhiyuan Zou, Bangchao Wang, Xinrong Hu, Yang Deng, Hongyan Wan, Huan Jin
This study addresses the challenge of requirements-to-code traceability by proposing a novel model, Genetic Algorithm-XGBoost With Code Dependency (GA-XWCoDe), which integrates eXtreme Gradient Boosting (XGBoost) with a Node2Vec model-weighted code dependency strategy and genetic algorithms for parameter optimisation. XGBoost mitigates overfitting and enhances model stability, while Node2Vec improves prediction accuracy for low-confidence links. Genetic algorithms are employed to optimise model parameters efficiently, reducing the resource intensity of traditional methods. Experimental results show that GA-XWCoDe outperforms the state-of-the-art method TRAceability lInk cLassifier (TRAIL) by 17.44% and Deep Forest for Requirement traceability (DF4RT) by 33.36% in terms of average F1 performance across four datasets. It is significantly superior to all baseline methods at a confidence level of α < 0.01 and demonstrates exceptional performance and stability across various training data scales.
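As a rough illustration of the parameter-optimisation component, the sketch below runs a small genetic algorithm over a few XGBoost hyperparameters, scoring each candidate by cross-validated F1; the search space, population size, and operators are assumed values, and the Node2Vec-weighted code dependency step is not reproduced here.

```python
import random
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

# Assumed hyperparameter search space for illustration.
SPACE = {"max_depth": (3, 10), "learning_rate": (0.01, 0.3), "n_estimators": (50, 400)}

def random_individual():
    return {"max_depth": random.randint(*SPACE["max_depth"]),
            "learning_rate": random.uniform(*SPACE["learning_rate"]),
            "n_estimators": random.randint(*SPACE["n_estimators"])}

def fitness(params, X, y):
    # Cross-validated F1 of an XGBoost classifier with the candidate parameters.
    return cross_val_score(XGBClassifier(**params), X, y, cv=3, scoring="f1").mean()

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(ind, rate=0.2):
    # Reset-style mutation: occasionally replace the individual with a fresh sample.
    return random_individual() if random.random() < rate else ind

def ga_tune(X, y, pop_size=10, generations=5):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda p: fitness(p, X, y), reverse=True)
        parents = ranked[: pop_size // 2]            # elitist selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, X, y))
```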
{"title":"Enhancing requirements-to-code traceability with GA-XWCoDe: Integrating XGBoost, Node2Vec, and genetic algorithms for improving model performance and stability","authors":"Zhiyuan Zou , Bangchao Wang , Xinrong Hu , Yang Deng , Hongyan Wan , Huan Jin","doi":"10.1016/j.jksuci.2024.102197","DOIUrl":"10.1016/j.jksuci.2024.102197","url":null,"abstract":"<div><div>This study addresses the challenge of requirements-to-code traceability by proposing a novel model, Genetic Algorithm-XGBoost With Code Dependency (GA-XWCoDe), which integrates eXtreme Gradient Boosting (XGBoost) with a Node2Vec model-weighted code dependency strategy and genetic algorithms for parameter optimisation. XGBoost mitigates overfitting and enhances model stability, while Node2Vec improves prediction accuracy for low-confidence links. Genetic algorithms are employed to optimise model parameters efficiently, reducing the resource intensity of traditional methods. Experimental results show that GA-XWCoDe outperforms the state-of-the-art method TRAceability lInk cLassifier (TRAIL) by 17.44% and Deep Forest for Requirement traceability (DF4RT) by 33.36% in terms of average F1 performance across four datasets. It is significantly superior to all baseline methods at a confidence level of <span><math><mi>α</mi></math></span>¡0.01 and demonstrates exceptional performance and stability across various training data scales.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 8","pages":"Article 102197"},"PeriodicalIF":5.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142358137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-30 | DOI: 10.1016/j.jksuci.2024.102199
Antonio Cedillo-Hernandez, Lydia Velazquez-Garcia, Manuel Cedillo-Hernandez, David Conchouso-Gonzalez
Generally speaking, watermarking schemes that operate in the spatial domain tend to be fast but offer limited robustness and imperceptibility, whereas those performed in transform domains are robust but carry a high computational cost. One of the main challenges of watermarking digital video is the large amount of computational power required to process the huge volume of information involved. In this paper we propose a watermarking algorithm for digital video that addresses this problem. To increase speed, the watermark is embedded with a technique that modifies the DCT coefficients directly in the spatial domain, and the embedding process treats the video scene, rather than the individual frame, as the basic unit. In terms of robustness, the watermark is modulated by a Just Noticeable Distortion (JND) scheme, computed directly in the spatial domain and guided by visual attention, which raises the watermark strength to the maximum level that remains imperceptible to human eyes. Experimental results confirm that the proposed method achieves remarkable performance in terms of processing time, robustness and imperceptibility compared to previous studies.
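Because the DCT is linear and orthonormal, a change of delta to one DCT coefficient can equivalently be applied as a purely spatial-domain addition of that coefficient's basis pattern, which is the essence of modifying DCT coefficients without computing the transform. The sketch below illustrates this on an 8x8 block with a fixed JND bound; the chosen coefficient position and bound are illustrative assumptions, not the paper's visual-attention-guided JND model.

```python
import numpy as np

def dct_basis(u, v, n=8):
    """2-D orthonormal DCT-II basis function for coefficient (u, v) of an n x n block."""
    a = lambda k: np.sqrt(1 / n) if k == 0 else np.sqrt(2 / n)
    x = np.arange(n)
    bu = a(u) * np.cos((2 * x + 1) * u * np.pi / (2 * n))
    bv = a(v) * np.cos((2 * x + 1) * v * np.pi / (2 * n))
    return np.outer(bu, bv)

def embed_bit_spatial(block, bit, u=3, v=4, jnd=4.0):
    """Embed one watermark bit by shifting DCT coefficient (u, v),
    applied as a spatial-domain pattern scaled by a JND bound."""
    delta = jnd if bit else -jnd              # coefficient shift, bounded by the JND
    return block + delta * dct_basis(u, v)    # equivalent to changing C(u, v) by delta

# Adding delta * basis(u, v) in the pixel domain changes only that coefficient
# (by orthonormality), so no forward/inverse DCT is needed during embedding.
block = np.random.randint(0, 256, (8, 8)).astype(float)
marked = embed_bit_spatial(block, bit=1)
```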
{"title":"Fast and robust JND-guided video watermarking scheme in spatial domain","authors":"Antonio Cedillo-Hernandez , Lydia Velazquez-Garcia , Manuel Cedillo-Hernandez , David Conchouso-Gonzalez","doi":"10.1016/j.jksuci.2024.102199","DOIUrl":"10.1016/j.jksuci.2024.102199","url":null,"abstract":"<div><div>Generally speaking, those watermarking studies using the spatial domain tend to be fast but with limited robustness and imperceptibility while those performed in other transform domains are robust but have high computational cost. Watermarking applied to digital video has as one of the main challenges the large amount of computational power required due to the huge amount of information to be processed. In this paper we propose a watermarking algorithm for digital video that addresses this problem. To increase the speed, the watermark is embedded using a technique to modify the DCT coefficients directly in the spatial domain, in addition to carrying out this process considering the video scene as the basic unit and not the video frame. In terms of robustness, the watermark is modulated by a Just Noticeable Distortion (JND) scheme computed directly in the spatial domain guided by visual attention to increase the strength of the watermark to the maximum level but without this operation being perceivable by human eyes. Experimental results confirm that the proposed method achieves remarkable performance in terms of processing time, robustness and imperceptibility compared to previous studies.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102199"},"PeriodicalIF":5.2,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142424439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-28 | DOI: 10.1016/j.jksuci.2024.102201
Abdulaziz Alhumam, Shakeel Ahmed
In recent years, distributed software development (DSD) has become increasingly prevalent as software development practices have rapidly evolved. This transformation necessitates a robust software requirement engineering (SRE) framework that can operate in federated environments, in which multiple independent software entities work together to develop software, often across organizational and geographical borders. The decentralized structure of the federated architecture makes requirement elicitation, analysis, specification, validation, and administration more effective. The proposed model emphasizes flexibility and agility, leveraging the collaboration of multiple localized models within a diversified development framework. This collaborative approach is designed to integrate the strengths of each local process, ultimately resulting in a robust software prototype. The performance of the proposed DSD model is evaluated using two case studies, an e-commerce website and a learning management system, by considering divergent functional and non-functional requirements for each case study and analyzing the results with standardized metrics such as mean square error (MSE), mean absolute error (MAE), and the Pearson correlation coefficient (PCC). The proposed model exhibited reasonable performance, with MSE values of 0.12 and 0.153 and MAE values of 0.222 and 0.232 for functional and non-functional requirements, respectively.
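For reference, the three evaluation metrics used above can be computed as in the following sketch; the requirement-score arrays are hypothetical and serve only to show the calculation.

```python
import numpy as np

def evaluate(predicted, actual):
    """Return MSE, MAE and Pearson correlation between predicted and actual scores."""
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    mse = np.mean((predicted - actual) ** 2)
    mae = np.mean(np.abs(predicted - actual))
    pcc = np.corrcoef(predicted, actual)[0, 1]
    return mse, mae, pcc

# Hypothetical requirement-quality scores on a 0-1 scale.
pred = [0.8, 0.6, 0.9, 0.4, 0.7]
true = [0.9, 0.5, 0.8, 0.5, 0.6]
print(evaluate(pred, true))
```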
{"title":"Software requirement engineering over the federated environment in distributed software development process","authors":"Abdulaziz Alhumam, Shakeel Ahmed","doi":"10.1016/j.jksuci.2024.102201","DOIUrl":"10.1016/j.jksuci.2024.102201","url":null,"abstract":"<div><div>In the recent past, the distributed software development (DSD) process has become increasingly prevalent with the rapid evolution of the software development process. This transformation would necessitate a robust framework for software requirement engineering (SRE) to work in federated environments. Using the federated environment, multiple independent software<!--> <!-->entities would<!--> <!-->work together to develop software, often across organizations<!--> <!-->and geographical borders. The decentralized structure of the federated architecture makes requirement elicitation, analysis, specification, validation, and administration more effective.<!--> <!-->The proposed model emphasizes flexibility and agility, leveraging the collaboration of multiple localized models within a diversified development framework. This collaborative approach is designed to integrate the strengths of each local process, ultimately resulting in the creation of a robust software prototype. The performance of the proposed DSD model is evaluated using two case studies on the E-Commerce website and the Learning Management system. The proposed model is analyzed by considering divergent functional and non-functional requirements for each of the case studies and analyzing the performance using standardized metrics like mean square error (MSE), mean absolute error (MAE), and Pearson Correlation Coefficient (PCC). It is observed that the proposed model exhibited a reasonable performance with an MSE value of 0.12 and 0.153 for both functional and non-functional requirements, respectively, and an MAE value of 0.222 and 0.232 for both functional and non-functional requirements, respectively.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102201"},"PeriodicalIF":5.2,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142424357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26 | DOI: 10.1016/j.jksuci.2024.102198
Jingwen Tang, Huicheng Lai, Guxue Gao, Tongguan Wang
In the context of intelligent community research, pedestrian detection is an important and challenging object detection task. The diversity of pedestrian target scales and interference from the surrounding background can cause a detector to produce incorrect and missed detections, while a large model can make the detector difficult to deploy. In response to these issues, this work presents a pedestrian feature enhancement lightweight network (PFEL-Net), which enables edge computing and accurate detection of multi-scale pedestrian targets in complex scenes. Firstly, a parallel dilated residual module is designed to expand the receptive field and obtain richer pedestrian features; then, a selective bidirectional diffusion pyramid network is devised to finely fuse features, and a detail feature layer captures multi-scale information; after that, a lightweight shared detection head is constructed to reduce the weight of the model head; finally, a channel pruning algorithm is employed to further reduce the computational complexity and size of the improved model without compromising accuracy. On the CityPersons dataset, compared to YOLOv8, PFEL-Net increases mAP50 and mAP50:95 by 6.3% and 4.9%, respectively, reduces the number of model parameters by 89%, and compresses the model size by 85%, to a mere 0.9 MB. Similarly, excellent performance is achieved on the TinyPerson dataset. The source code is available at https://github.com/1tangbao/PFEL.
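A minimal sketch of the parallel dilated residual idea, which expands the receptive field by running 3x3 branches at several dilation rates and adding the fused result back to the input, is shown below; the branch count, dilation rates, and fusion layer are assumptions, not the exact PFEL-Net module.

```python
import torch
import torch.nn as nn

class ParallelDilatedResidual(nn.Module):
    """Parallel 3x3 branches with different dilation rates, fused and added to the input."""
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations]
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)
        self.act = nn.SiLU()

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # widened receptive field
        return self.act(x + self.fuse(feats))                    # residual connection

x = torch.randn(1, 64, 80, 80)
print(ParallelDilatedResidual(64)(x).shape)   # torch.Size([1, 64, 80, 80])
```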
{"title":"PFEL-Net: A lightweight network to enhance feature for multi-scale pedestrian detection","authors":"Jingwen Tang , Huicheng Lai , Guxue Gao , Tongguan Wang","doi":"10.1016/j.jksuci.2024.102198","DOIUrl":"10.1016/j.jksuci.2024.102198","url":null,"abstract":"<div><div>In the context of intelligent community research, pedestrian detection is an important and challenging object detection task. The diversity in pedestrian target scales and the interference from the surrounding background can result in incorrect and missed detections by the detector, while a large algorithm model can pose challenges for deploying the detector. In response to these issues, this work presents a pedestrian feature enhancement lightweight network (PFEL-Net), which provides the possibility for edge computing and accurate detection of multi-scale pedestrian targets in complex scenes. Firstly, a parallel dilated residual module is designed to expand the receptive field for obtaining richer pedestrian features; then, the selective bidirectional diffusion pyramid network is devised to finely fuse features, and a detail feature layer captures multi-scale information; after that, the lightweight shared detection head is constructed to lightweight the model head; finally, the channel pruning algorithm is employed to further reduce the computational complexity and size of the improved model without compromising accuracy. On the CityPersons dataset, compared to YOLOv8, PFEL-Net increases the <span><math><mrow><mi>m</mi><mi>A</mi><msub><mrow><mi>P</mi></mrow><mrow><mn>50</mn></mrow></msub></mrow></math></span> and <span><math><mrow><mi>m</mi><mi>A</mi><msub><mrow><mi>P</mi></mrow><mrow><mn>50</mn><mo>:</mo><mn>95</mn></mrow></msub></mrow></math></span> by 6.3% and 4.9%, respectively, reduces the number of model parameters by 89% and compresses the model size by 85%, resulting in a mere 0.9 MB. Similarly, excellent performance is achieved on the TinyPerson dataset. The source code is available at <span><span>https://github.com/1tangbao/PFEL</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 8","pages":"Article 102198"},"PeriodicalIF":5.2,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142328201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26 | DOI: 10.1016/j.jksuci.2024.102196
Xi Liu, Jun Liu
Mobile Edge Computing (MEC) aims to decrease the response time and energy consumption of mobile applications by offloading the tasks of mobile devices (MDs) to MEC servers located at the edge of the network. The demands are multi-attribute: the distances between MDs and access points lead to differences in the required resources and in transmission energy consumption. Unfortunately, existing works have not considered the task allocation and energy consumption problems jointly. Motivated by this, this paper considers the problem of task allocation with multiple attributes, which consists of a winner determination problem and an offloading decision problem. First, the problem is formulated as an auction-based model to provide flexible service. Then, a randomized mechanism is designed that is truthful in expectation, which drives the system into an equilibrium in which no MD has an incentive to increase its utility by declaring an untrue value. In addition, an approximation algorithm is proposed to minimize remote energy consumption; it is a polynomial-time approximation scheme and therefore achieves a tradeoff between optimality loss and time complexity. Simulation results reveal that the proposed mechanism obtains a near-optimal allocation. Furthermore, compared with the baseline methods, the proposed mechanism can effectively increase social welfare and bring higher revenue to edge server providers.
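To picture the winner determination step, the sketch below ranks bids by value per unit of requested resource and admits them until the edge server's capacity is exhausted. This greedy allocation only illustrates the problem structure; it is not the paper's randomized, truthful-in-expectation mechanism, and the Bid fields are assumed for illustration.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    device: str
    value: float      # declared valuation of having the task served at the edge
    demand: float     # multi-attribute demand collapsed to a single resource unit here

def greedy_winners(bids, capacity):
    """Admit bids in order of decreasing value density until server capacity is used up."""
    winners, used = [], 0.0
    for bid in sorted(bids, key=lambda b: b.value / b.demand, reverse=True):
        if used + bid.demand <= capacity:
            winners.append(bid.device)
            used += bid.demand
    return winners

bids = [Bid("md1", 10, 4), Bid("md2", 6, 2), Bid("md3", 9, 5)]
print(greedy_winners(bids, capacity=8))   # ['md2', 'md1']
```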
{"title":"A truthful randomized mechanism for task allocation with multi-attributes in mobile edge computing","authors":"Xi Liu , Jun Liu","doi":"10.1016/j.jksuci.2024.102196","DOIUrl":"10.1016/j.jksuci.2024.102196","url":null,"abstract":"<div><div>Mobile Edge Computing (MEC) aims at decreasing the response time and energy consumption of running mobile applications by offloading the tasks of mobile devices (MDs) to the MEC servers located at the edge of the network. The demands are multi-attribute, where the distances between MDs and access points lead to differences in required resources and transmission energy consumption. Unfortunately, the existing works have not considered both task allocation and energy consumption problems. Motivated by this, this paper considers the problem of task allocation with multi-attributes, where the problem consists of the winner determination and offloading decision problems. First, the problem is formulated as the auction-based model to provide flexible service. Then, a randomized mechanism is designed and is truthful in expectation. This drives the system into an equilibrium where no MD has incentives to increase the utility by declaring an untrue value. In addition, an approximation algorithm is proposed to minimize remote energy consumption and is a polynomial-time approximation scheme. Therefore, it achieves a tradeoff between optimality loss and time complexity. Simulation results reveal that the proposed mechanism gets the near-optimal allocation. Furthermore, compared with the baseline methods, the proposed mechanism can effectively increase social welfare and bring higher revenue to edge server providers.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102196"},"PeriodicalIF":5.2,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142424437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-25 | DOI: 10.1016/j.jksuci.2024.102190
Xiaoyu Cai, Zimu Li, Jiajia Dai, Liang Lv, Bo Peng
This study aims to enhance the understanding of vehicle path selection behavior within arterial road networks by investigating the influencing factors and analyzing spatial and temporal traffic flow distributions. Using radio frequency identification (RFID) travel data, key factors such as travel duration, route familiarity, route length, expressway ratio, arterial road ratio, and ramp ratio were identified. We then proposed an origin–destination path acquisition method and developed a route-selection prediction model based on a multinomial logit model with sample weights. Additionally, the study linked the traffic control scheme with travel time using the Bureau of Public Roads function, a model that relates network-wide travel time to traffic demand, and developed an arterial road network traffic forecasting model. Verification showed that the prediction accuracy of the improved multinomial logit model increased from 92.55% to 97.87%. Furthermore, reducing the green time ratio for multilane merging from 0.75 to 0.5 significantly decreased the likelihood of vehicles choosing this route and reduced the number of vehicles passing through the ramp. The flow prediction model achieved 97.9% accuracy, accurately reflecting actual volume changes and ensuring smooth operation of the main airport road. This provides a strong foundation for developing effective traffic control plans.
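The Bureau of Public Roads (BPR) function and the multinomial logit probabilities referenced above have standard closed forms, sketched below; the BPR coefficients (alpha = 0.15, beta = 4) are the common defaults rather than the values calibrated in this study, and the route utilities are hypothetical.

```python
import numpy as np

def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4):
    """BPR link travel time: free-flow time inflated by the volume/capacity ratio."""
    return t0 * (1 + alpha * (volume / capacity) ** beta)

def mnl_probabilities(utilities):
    """Multinomial logit route-choice probabilities from route utilities."""
    expu = np.exp(np.asarray(utilities, float))
    return expu / expu.sum()

# Two hypothetical routes: utility falls as congested travel time rises.
t_route = [bpr_travel_time(10, 800, 1000), bpr_travel_time(12, 400, 1000)]
print(t_route, mnl_probabilities([-0.3 * t for t in t_route]))
```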
{"title":"Flow prediction of mountain cities arterial road network for real-time regulation","authors":"Xiaoyu Cai , Zimu Li , Jiajia Dai , Liang Lv , Bo Peng","doi":"10.1016/j.jksuci.2024.102190","DOIUrl":"10.1016/j.jksuci.2024.102190","url":null,"abstract":"<div><div>This study aims to enhance the understanding of vehicle path selection behavior within arterial road networks by investigating the influencing factors and analyzing spatial and temporal traffic flow distributions. Using radio frequency identification (RFID) travel data, key factors such as travel duration, route familiarity, route length, expressway ratio, arterial road ratio, and ramp ratio were identified. We then proposed an origin–destination path acquisition method and developed a route-selection prediction model based on a multinomial logit model with sample weights. Additionally, the study linked the traffic control scheme with travel time using the Bureau of Public Roads function—a model that illustrates the relationship between network-wide travel time and traffic demand—and developed an arterial road network traffic forecasting model. Verification showed that the prediction accuracy of the improved multinomial logit model increased from 92.55 % to 97.87 %. Furthermore, reducing the green time ratio for multilane merging from 0.75 to 0.5 significantly decreased the likelihood of vehicles choosing this route and reduced the number of vehicles passing through the ramp. The flow prediction model achieved a 97.9 % accuracy, accurately reflecting actual volume changes and ensuring smooth operation of the main airport road. This provides a strong foundation for developing effective traffic control plans.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 8","pages":"Article 102190"},"PeriodicalIF":5.2,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142328094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-25 | DOI: 10.1016/j.jksuci.2024.102195
Mousa Tayseer Jafar, Lu-Xing Yang, Gang Li, Xiaofan Yang
Cybercrime statistics highlight the severe and growing impact of digital threats on individuals and organizations, with financial losses escalating rapidly. As cybersecurity becomes a central challenge, many modern cyber defense strategies prove insufficient for countering the threats posed by sophisticated attackers: despite advances in the field, existing frameworks often cannot keep pace with the evolving tactics of adept adversaries. As cyber threats grow in sophistication and diversity, the shortcomings of current defense strategies are increasingly acknowledged, underscoring the need for more robust and innovative solutions. To develop resilient cyber defense strategies, it remains essential to simulate the dynamic interaction between sophisticated attackers and system defenders; such simulations enable organizations to anticipate and effectively counter emerging threats. The Flip-It game is recognized as an intelligent simulation game for capturing this dynamic interplay. It makes it possible to emulate intricate cyber scenarios, allowing organizations to assess their defensive capabilities against evolving threats, analyze vulnerabilities, and improve their response strategies. This paper provides a comprehensive analysis of the Flip-It game in the context of cybersecurity, tracing its development from inception to future prospects. It highlights significant contributions and identifies potential future research avenues for scholars in the field. The study aims to deliver a thorough understanding of the Flip-It game's progression, serving as a valuable resource for researchers and practitioners involved in cybersecurity strategy and defense mechanisms.
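For readers unfamiliar with the game, the sketch below simulates a basic Flip-It instance in which a defender flips the resource periodically and an attacker flips at exponentially distributed intervals, and reports control time and net payoff; the strategies, rate, and flip cost are illustrative assumptions, not results from the surveyed literature.

```python
import random
import numpy as np

def simulate_flipit(horizon=1000.0, defender_period=5.0, attacker_rate=0.1,
                    flip_cost=1.0, seed=0):
    """Monte-Carlo sketch of Flip-It: both players 'flip' (seize) a shared resource
    without observing its current owner; payoff = control time minus flip costs."""
    rng = random.Random(seed)
    moves = [(t, "defender") for t in np.arange(defender_period, horizon, defender_period)]
    t = 0.0
    while True:                                  # attacker flips at exponential intervals
        t += rng.expovariate(attacker_rate)
        if t >= horizon:
            break
        moves.append((t, "attacker"))
    moves.sort()
    owner, last = "defender", 0.0
    control = {"defender": 0.0, "attacker": 0.0}
    flips = {"defender": 0, "attacker": 0}
    for t, player in moves:
        control[owner] += t - last               # credit the current owner up to this flip
        owner, last = player, t
        flips[player] += 1
    control[owner] += horizon - last
    payoff = {p: control[p] - flip_cost * flips[p] for p in control}
    return control, payoff

print(simulate_flipit())
```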
{"title":"The evolution of the flip-it game in cybersecurity: Insights from the past to the future","authors":"Mousa Tayseer Jafar , Lu-Xing Yang , Gang Li , Xiaofan Yang","doi":"10.1016/j.jksuci.2024.102195","DOIUrl":"10.1016/j.jksuci.2024.102195","url":null,"abstract":"<div><div>Cybercrime statistics highlight the severe and growing impact of digital threats on individuals and organizations, with financial losses escalating rapidly. As cybersecurity becomes a central challenge, several modern cyber defense strategies prove insufficient for effectively countering the threats posed by sophisticated attackers. Despite advancements in cybersecurity, many existing frameworks often lack the capacity to address the evolving tactics of adept adversaries. With cyber threats growing in sophistication and diversity, there is a growing acknowledgment of the shortcomings within current defense strategies, underscoring the need for more robust and innovative solutions. To develop resilient cyber defense strategies, it remains essential to simulate the dynamic interaction between sophisticated attackers and system defenders. Such simulations enable organizations to anticipate and effectively counter emerging threats. The Flip-It game is recognized as an intelligent simulation game for capturing the dynamic interplay between sophisticated attackers and system defenders. It provides the capability to emulate intricate cyber scenarios, allowing organizations to assess their defensive capabilities against evolving threats, analyze vulnerabilities, and improve their response strategies by simulating real-world cyber scenarios. This paper provides a comprehensive analysis of the Flip-It game in the context of cybersecurity, tracing its development from inception to future prospects. It highlights significant contributions and identifies potential future research avenues for scholars in the field. This study aims to deliver a thorough understanding of the Flip-It game’s progression, serving as a valuable resource for researchers and practitioners involved in cybersecurity strategy and defense mechanisms.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102195"},"PeriodicalIF":5.2,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142424356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-20 | DOI: 10.1016/j.jksuci.2024.102189
Syed Sarmad Ali, Jian Ren, Ji Wu
This investigation focuses on refining software effort estimation (SEE) to enhance project outcomes amidst the rapid evolution of the software industry. Accurate estimation is a cornerstone of project success, crucial for avoiding budget overruns and minimizing the risk of project failures. The framework proposed in this article addresses three significant issues that are critical for accurate estimation: dealing with missing or inadequate data, selecting key features, and improving the software effort model. Our proposed framework incorporates three methods: the Novel Incomplete Value Imputation Model (NIVIM), a hybrid model using Correlation-based Feature Selection with a meta-heuristic algorithm (CFS-Meta), and the Heterogeneous Ensemble Model (HEM). The combined framework synergistically enhances the robustness and accuracy of SEE by effectively handling missing data, optimizing feature selection, and integrating diverse predictive models for superior performance across varying project scenarios. The framework significantly reduces imputation and feature selection overhead, while the ensemble approach optimizes model performance through dynamic weighting and meta-learning. This results in lower mean absolute error (MAE) and reduced computational complexity, making it more effective for diverse software datasets. NIVIM is engineered to address incomplete datasets prevalent in SEE. By integrating a synthetic data methodology through a Variational Auto-Encoder (VAE), the model incorporates both contextual relevance and intrinsic project features, significantly enhancing estimation precision. Comparative analyses reveal that NIVIM surpasses existing models such as VAE, GAIN, K-NN, and MICE, achieving statistically significant improvements across six benchmark datasets, with average RMSE improvements ranging from 11.05% to 17.72% and MAE improvements from 9.62% to 21.96%. Our proposed method, CFS-Meta, balances global optimization with local search techniques, substantially enhancing predictive capabilities. The proposed CFS-Meta model was compared to single and hybrid feature selection models to assess its efficiency, demonstrating up to a 25.61% reduction in MSE. Additionally, the proposed CFS-Meta achieves a 10% (MAE) improvement against the hybrid PSO-SA model, an 11.38% (MAE) improvement compared to the Hybrid ABC-SA model, and 12.42% and 12.703% (MAE) improvements compared to the hybrid Tabu-GA and hybrid ACO-COA models, respectively. Our third method proposes an ensemble effort estimation (EEE) model that amalgamates diverse standalone models through a Dynamic Weight Adjustment-stacked combination (DWSC) rule. Tested against international benchmarks and industry datasets, the HEM method has improved the standalone model by an average of 21.8% (Pred()) and the homogeneous ensemble model by 15% (Pred()).
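The dynamic-weighting idea behind the heterogeneous ensemble can be pictured by weighting each base regressor by the inverse of its validation MAE, as in the sketch below; the base models and the weighting rule are assumptions for illustration and are not the paper's exact DWSC rule.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error

class DynamicWeightEnsemble:
    """Heterogeneous ensemble whose weights come from validation MAE (lower MAE -> higher weight)."""
    def __init__(self, models):
        self.models = models
        self.weights = None

    def fit(self, X_train, y_train, X_val, y_val):
        maes = []
        for m in self.models:
            m.fit(X_train, y_train)
            maes.append(mean_absolute_error(y_val, m.predict(X_val)))
        inv = 1.0 / (np.array(maes) + 1e-9)
        self.weights = inv / inv.sum()            # normalized inverse-error weights
        return self

    def predict(self, X):
        preds = np.column_stack([m.predict(X) for m in self.models])
        return preds @ self.weights               # weighted combination of base estimates

ensemble = DynamicWeightEnsemble([LinearRegression(),
                                  DecisionTreeRegressor(max_depth=4),
                                  KNeighborsRegressor(n_neighbors=3)])
```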
{"title":"Framework to improve software effort estimation accuracy using novel ensemble rule","authors":"Syed Sarmad Ali , Jian Ren , Ji Wu","doi":"10.1016/j.jksuci.2024.102189","DOIUrl":"10.1016/j.jksuci.2024.102189","url":null,"abstract":"<div><div>This investigation focuses on refining software effort estimation (SEE) to enhance project outcomes amidst the rapid evolution of the software industry. Accurate estimation is a cornerstone of project success, crucial for avoiding budget overruns and minimizing the risk of project failures. The framework proposed in this article addresses three significant issues that are critical for accurate estimation: dealing with missing or inadequate data, selecting key features, and improving the software effort model. Our proposed framework incorporates three methods: the <em>Novel Incomplete Value Imputation Model (NIVIM)</em>, a hybrid model using <em>Correlation-based Feature Selection with a meta-heuristic algorithm (CFS-Meta)</em>, and the <em>Heterogeneous Ensemble Model (HEM)</em>. The combined framework synergistically enhances the robustness and accuracy of SEE by effectively handling missing data, optimizing feature selection, and integrating diverse predictive models for superior performance across varying project scenarios. The framework significantly reduces imputation and feature selection overhead, while the ensemble approach optimizes model performance through dynamic weighting and meta-learning. This results in lower mean absolute error (MAE) and reduced computational complexity, making it more effective for diverse software datasets. NIVIM is engineered to address incomplete datasets prevalent in SEE. By integrating a synthetic data methodology through a Variational Auto-Encoder (VAE), the model incorporates both contextual relevance and intrinsic project features, significantly enhancing estimation precision. Comparative analyses reveal that NIVIM surpasses existing models such as VAE, GAIN, K-NN, and MICE, achieving statistically significant improvements across six benchmark datasets, with average RMSE improvements ranging from <em>11.05%</em> to <em>17.72%</em> and MAE improvements from <em>9.62%</em> to <em>21.96%</em>. Our proposed method, CFS-Meta, balances global optimization with local search techniques, substantially enhancing predictive capabilities. The proposed CFS-Meta model was compared to single and hybrid feature selection models to assess its efficiency, demonstrating up to a <em>25.61%</em> reduction in MSE. Additionally, the proposed CFS-Meta achieves a <em>10%</em> (MAE) improvement against the hybrid PSO-SA model, an <em>11.38%</em> (MAE) improvement compared to the Hybrid ABC-SA model, and <em>12.42%</em> and <em>12.703%</em> (MAE) improvements compared to the hybrid Tabu-GA and hybrid ACO-COA models, respectively. Our third method proposes an ensemble effort estimation (EEE) model that amalgamates diverse standalone models through a Dynamic Weight Adjustment-stacked combination (DWSC) rule. Tested against international benchmarks and industry datasets, the HEM method has improved the standalone model by an average of <em>21.8%</em> (Pred()) and the homogeneous ensemble model by <em>15%</em> (Pred()). 
This","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102189"},"PeriodicalIF":5.2,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142424438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}