Pub Date: 2024-09-28 | DOI: 10.1016/j.jksuci.2024.102201
Abdulaziz Alhumam, Shakeel Ahmed
In recent years, the distributed software development (DSD) process has become increasingly prevalent as software development practices have rapidly evolved. This transformation necessitates a robust framework for software requirement engineering (SRE) that works in federated environments, in which multiple independent software entities collaborate to develop software, often across organizational and geographical borders. The decentralized structure of the federated architecture makes requirement elicitation, analysis, specification, validation, and administration more effective. The proposed model emphasizes flexibility and agility, leveraging the collaboration of multiple localized models within a diversified development framework. This collaborative approach is designed to integrate the strengths of each local process, ultimately resulting in a robust software prototype. The performance of the proposed DSD model is evaluated using two case studies: an e-commerce website and a learning management system. For each case study, the model is analyzed against divergent functional and non-functional requirements, with performance measured using standardized metrics such as mean square error (MSE), mean absolute error (MAE), and the Pearson correlation coefficient (PCC). The proposed model exhibited reasonable performance, with MSE values of 0.12 and 0.153 and MAE values of 0.222 and 0.232 for functional and non-functional requirements, respectively.
Title: "Software requirement engineering over the federated environment in distributed software development process" (Journal of King Saud University-Computer and Information Sciences, 36(9), Article 102201)
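The three evaluation metrics the abstract cites (MSE, MAE, PCC) have standard closed forms. A minimal stdlib-Python sketch, with illustrative function names and data rather than the authors' code:

```python
import math

def mse(y_true, y_pred):
    # mean square error: average of squared residuals
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    # mean absolute error: average of absolute residuals
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def pcc(x, y):
    # Pearson correlation coefficient: covariance over the product of std devs
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

With these definitions, an MSE of 0.12 alongside an MAE of 0.222 (as reported for functional requirements) is internally consistent, since MAE is not the square root of MSE.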
Pub Date: 2024-09-26 | DOI: 10.1016/j.jksuci.2024.102198
Jingwen Tang , Huicheng Lai , Guxue Gao , Tongguan Wang
In the context of intelligent community research, pedestrian detection is an important and challenging object detection task. The diversity of pedestrian target scales and interference from the surrounding background can cause incorrect and missed detections, while a large algorithm model poses challenges for deploying the detector. In response to these issues, this work presents a pedestrian feature enhancement lightweight network (PFEL-Net), which enables edge computing and accurate detection of multi-scale pedestrian targets in complex scenes. Firstly, a parallel dilated residual module is designed to expand the receptive field and obtain richer pedestrian features; then, a selective bidirectional diffusion pyramid network is devised to finely fuse features, with a detail feature layer capturing multi-scale information; after that, a lightweight shared detection head is constructed to reduce the weight of the model head; finally, a channel pruning algorithm is employed to further reduce the computational complexity and size of the improved model without compromising accuracy. On the CityPersons dataset, compared to YOLOv8, PFEL-Net increases the mAP50 and mAP50:95 by 6.3% and 4.9%, respectively, reduces the number of model parameters by 89%, and compresses the model size by 85%, to a mere 0.9 MB. Similarly, excellent performance is achieved on the TinyPerson dataset. The source code is available at https://github.com/1tangbao/PFEL.
Title: "PFEL-Net: A lightweight network to enhance feature for multi-scale pedestrian detection" (Journal of King Saud University-Computer and Information Sciences, 36(8), Article 102198)
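Channel pruning of the kind this abstract describes is commonly done by ranking channels by the magnitude of their weights and discarding the weakest fraction. The sketch below illustrates that generic L1-norm criterion; the paper's actual pruning rule may differ, and the function name and data layout are assumptions:

```python
def prune_channels(channel_weights, keep_ratio):
    # channel_weights: one list of weights per channel.
    # Rank channels by L1 norm and keep the strongest `keep_ratio` fraction.
    norms = [(i, sum(abs(w) for w in ws)) for i, ws in enumerate(channel_weights)]
    norms.sort(key=lambda item: item[1], reverse=True)
    k = max(1, int(len(channel_weights) * keep_ratio))
    # return the indices of surviving channels, in their original order
    return sorted(i for i, _ in norms[:k])
```

Pruning half the channels of a toy 4-channel layer keeps only the two with the largest total weight magnitude, which is how an 89% parameter reduction can leave accuracy largely intact when most weights are near zero.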
Pub Date: 2024-09-26 | DOI: 10.1016/j.jksuci.2024.102196
Xi Liu , Jun Liu
Mobile Edge Computing (MEC) aims to decrease the response time and energy consumption of mobile applications by offloading tasks from mobile devices (MDs) to MEC servers located at the edge of the network. The demands are multi-attribute: the distances between MDs and access points lead to differences in required resources and transmission energy consumption. Unfortunately, existing works have not jointly considered the task allocation and energy consumption problems. Motivated by this, this paper considers the problem of task allocation with multiple attributes, which consists of a winner determination problem and an offloading decision problem. First, the problem is formulated as an auction-based model to provide flexible service. Then, a randomized mechanism is designed that is truthful in expectation; this drives the system into an equilibrium in which no MD has an incentive to increase its utility by declaring an untrue value. In addition, an approximation algorithm is proposed to minimize remote energy consumption; it is a polynomial-time approximation scheme and therefore achieves a tradeoff between optimality loss and time complexity. Simulation results reveal that the proposed mechanism obtains a near-optimal allocation. Furthermore, compared with baseline methods, the proposed mechanism effectively increases social welfare and brings higher revenue to edge server providers.
Title: "A truthful randomized mechanism for task allocation with multi-attributes in mobile edge computing" (Journal of King Saud University-Computer and Information Sciences, 36(9), Article 102196)
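Winner determination in auction-based offloading can be illustrated with a simple greedy rule that admits bids in order of value density until server capacity is exhausted. This is a generic sketch, not the paper's randomized truthful mechanism; the bid format and capacity model are assumptions:

```python
def winner_determination(bids, capacity):
    # bids: list of (md_id, resource_demand, declared_value).
    # Greedy by value density (value per unit of demanded resource).
    order = sorted(bids, key=lambda b: b[2] / b[1], reverse=True)
    winners, used = [], 0
    for md_id, demand, value in order:
        if used + demand <= capacity:   # admit only if the server can still fit it
            winners.append(md_id)
            used += demand
    return winners
```

A deterministic greedy rule like this is generally not truthful, which is precisely why the paper resorts to a randomized mechanism that is truthful in expectation.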
Pub Date: 2024-09-25 | DOI: 10.1016/j.jksuci.2024.102190
Xiaoyu Cai , Zimu Li , Jiajia Dai , Liang Lv , Bo Peng
This study aims to enhance the understanding of vehicle path selection behavior within arterial road networks by investigating the influencing factors and analyzing spatial and temporal traffic flow distributions. Using radio frequency identification (RFID) travel data, key factors such as travel duration, route familiarity, route length, expressway ratio, arterial road ratio, and ramp ratio were identified. We then proposed an origin–destination path acquisition method and developed a route-selection prediction model based on a multinomial logit model with sample weights. Additionally, the study linked the traffic control scheme with travel time using the Bureau of Public Roads function—a model that illustrates the relationship between network-wide travel time and traffic demand—and developed an arterial road network traffic forecasting model. Verification showed that the prediction accuracy of the improved multinomial logit model increased from 92.55 % to 97.87 %. Furthermore, reducing the green time ratio for multilane merging from 0.75 to 0.5 significantly decreased the likelihood of vehicles choosing this route and reduced the number of vehicles passing through the ramp. The flow prediction model achieved a 97.9 % accuracy, accurately reflecting actual volume changes and ensuring smooth operation of the main airport road. This provides a strong foundation for developing effective traffic control plans.
Title: "Flow prediction of mountain cities arterial road network for real-time regulation" (Journal of King Saud University-Computer and Information Sciences, 36(8), Article 102190)
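The two building blocks named in the abstract have well-known closed forms: the Bureau of Public Roads (BPR) link-performance function relates travel time to the volume/capacity ratio, and multinomial-logit route choice is a softmax over route utilities. A sketch using the conventional BPR parameters alpha=0.15, beta=4 (the paper's calibrated values are not given in the abstract):

```python
import math

def bpr_travel_time(free_flow_time, volume, capacity, alpha=0.15, beta=4):
    # BPR function: travel time grows polynomially with congestion (v/c)
    return free_flow_time * (1 + alpha * (volume / capacity) ** beta)

def route_probabilities(utilities):
    # multinomial logit: the choice probability of each route is a
    # softmax over the systematic utilities of all alternatives
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]
```

At zero volume the BPR function returns the free-flow time, and at volume equal to capacity it inflates it by 15%, which is the mechanism the study uses to couple signal-control schemes (via capacity) to predicted travel times.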
Pub Date: 2024-09-25 | DOI: 10.1016/j.jksuci.2024.102195
Mousa Tayseer Jafar , Lu-Xing Yang , Gang Li , Xiaofan Yang
Cybercrime statistics highlight the severe and growing impact of digital threats on individuals and organizations, with financial losses escalating rapidly. As cybersecurity becomes a central challenge, several modern cyber defense strategies prove insufficient for effectively countering the threats posed by sophisticated attackers. Despite advancements in cybersecurity, many existing frameworks often lack the capacity to address the evolving tactics of adept adversaries. With cyber threats growing in sophistication and diversity, there is a growing acknowledgment of the shortcomings within current defense strategies, underscoring the need for more robust and innovative solutions. To develop resilient cyber defense strategies, it remains essential to simulate the dynamic interaction between sophisticated attackers and system defenders. Such simulations enable organizations to anticipate and effectively counter emerging threats. The Flip-It game is recognized as an intelligent simulation game for capturing the dynamic interplay between sophisticated attackers and system defenders. It provides the capability to emulate intricate cyber scenarios, allowing organizations to assess their defensive capabilities against evolving threats, analyze vulnerabilities, and improve their response strategies by simulating real-world cyber scenarios. This paper provides a comprehensive analysis of the Flip-It game in the context of cybersecurity, tracing its development from inception to future prospects. It highlights significant contributions and identifies potential future research avenues for scholars in the field. This study aims to deliver a thorough understanding of the Flip-It game’s progression, serving as a valuable resource for researchers and practitioners involved in cybersecurity strategy and defense mechanisms.
Title: "The evolution of the flip-it game in cybersecurity: Insights from the past to the future" (Journal of King Saud University-Computer and Information Sciences, 36(9), Article 102195)
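In the basic Flip-It game, an attacker and a defender each "flip" control of a shared resource at moments of their choosing, without observing the other's moves, and each player's payoff depends on the fraction of time they control the resource. A minimal stdlib simulation of that dynamic, assuming both players use exponential (Poisson-rate) strategies; rates, names, and the defender-starts convention are illustrative assumptions:

```python
import random

def simulate_flipit(horizon, attacker_rate, defender_rate, seed=0):
    """Estimate each player's fraction of control time over [0, horizon)."""
    rng = random.Random(seed)

    def move_times(rate):
        # Poisson process: exponential gaps between a player's flip moves
        t, times = 0.0, []
        while True:
            t += rng.expovariate(rate)
            if t >= horizon:
                return times
            times.append(t)

    events = sorted([(t, "A") for t in move_times(attacker_rate)] +
                    [(t, "D") for t in move_times(defender_rate)])
    owner, last, control = "D", 0.0, {"A": 0.0, "D": 0.0}
    for t, player in events:
        control[owner] += t - last   # credit the elapsed interval to the owner
        owner, last = player, t      # whoever moves takes (or keeps) control
    control[owner] += horizon - last
    return {p: c / horizon for p, c in control.items()}
```

Sweeping the two rates in a simulator like this is the kind of what-if analysis the surveyed literature uses to compare defender strategies against stealthy attackers.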
Pub Date: 2024-09-20 | DOI: 10.1016/j.jksuci.2024.102189
Syed Sarmad Ali , Jian Ren , Ji Wu
This investigation focuses on refining software effort estimation (SEE) to enhance project outcomes amidst the rapid evolution of the software industry. Accurate estimation is a cornerstone of project success, crucial for avoiding budget overruns and minimizing the risk of project failures. The framework proposed in this article addresses three significant issues that are critical for accurate estimation: dealing with missing or inadequate data, selecting key features, and improving the software effort model. Our proposed framework incorporates three methods: the Novel Incomplete Value Imputation Model (NIVIM), a hybrid model using Correlation-based Feature Selection with a meta-heuristic algorithm (CFS-Meta), and the Heterogeneous Ensemble Model (HEM). The combined framework synergistically enhances the robustness and accuracy of SEE by effectively handling missing data, optimizing feature selection, and integrating diverse predictive models for superior performance across varying project scenarios. The framework significantly reduces imputation and feature selection overhead, while the ensemble approach optimizes model performance through dynamic weighting and meta-learning. This results in lower mean absolute error (MAE) and reduced computational complexity, making it more effective for diverse software datasets. NIVIM is engineered to address incomplete datasets prevalent in SEE. By integrating a synthetic data methodology through a Variational Auto-Encoder (VAE), the model incorporates both contextual relevance and intrinsic project features, significantly enhancing estimation precision. Comparative analyses reveal that NIVIM surpasses existing models such as VAE, GAIN, K-NN, and MICE, achieving statistically significant improvements across six benchmark datasets, with average RMSE improvements ranging from 11.05% to 17.72% and MAE improvements from 9.62% to 21.96%. Our proposed method, CFS-Meta, balances global optimization with local search techniques, substantially enhancing predictive capabilities. The proposed CFS-Meta model was compared to single and hybrid feature selection models to assess its efficiency, demonstrating up to a 25.61% reduction in MSE. Additionally, the proposed CFS-Meta achieves a 10% (MAE) improvement against the hybrid PSO-SA model, an 11.38% (MAE) improvement compared to the hybrid ABC-SA model, and 12.42% and 12.703% (MAE) improvements compared to the hybrid Tabu-GA and hybrid ACO-COA models, respectively. Our third method proposes an ensemble effort estimation (EEE) model that amalgamates diverse standalone models through a Dynamic Weight Adjustment-stacked combination (DWSC) rule. Tested against international benchmarks and industry datasets, the HEM method has improved the standalone model by an average of 21.8% (Pred()) and the homogeneous ensemble model by 15% (Pred()). This
Title: "Framework to improve software effort estimation accuracy using novel ensemble rule" (Journal of King Saud University-Computer and Information Sciences, 36(9), Article 102189)
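The Pred() metric reported above (its level argument is elided in the abstract) is the standard SEE accuracy measure: the fraction of projects whose magnitude of relative error falls within a given level, conventionally 25%. A sketch with that conventional default, which is an assumption here:

```python
def pred(actuals, estimates, level=0.25):
    # fraction of projects whose relative error |a - e| / a is within `level`
    hits = sum(1 for a, e in zip(actuals, estimates)
               if abs(a - e) / a <= level)
    return hits / len(actuals)
```

For example, estimates of 110, 180, and 95 person-hours against actuals of 100 each give Pred(0.25) = 2/3, since only the 180 estimate misses the 25% band.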
Pub Date: 2024-09-18 | DOI: 10.1016/j.jksuci.2024.102193
Heqi Gao , Jiayi Zhang , Guijuan Zhang , Chengming Zhang , Zena Tian , Dianjie Lu
When emergencies occur, panic spreads quickly across cyberspace and physical space. Despite widespread attention to emotional contagion in cyber–physical societies (CPS), existing studies often overlook individual relationship heterogeneity, which results in imprecise models. To address this issue, we propose a heterogeneous emotional contagion method for CPS. First, we introduce the Strong–Weak Emotional Contagion Model (SW-ECM) to simulate the heterogeneous emotional contagion process in CPS. Second, we formulate the mean-field equations for the SW-ECM to accurately capture the dynamic evolution of heterogeneous emotional contagion in the CPS. Finally, we construct a small-world network based on strong–weak relationships to validate the effectiveness of our method. The experimental results show that our method can effectively simulate the heterogeneous emotional contagion and capture changes in relationships between individuals, providing valuable guidance for crowd evacuations prone to emotional contagion.
Title: "Heterogeneous emotional contagion of the cyber–physical society" (Journal of King Saud University-Computer and Information Sciences, 36(8), Article 102193)
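The core idea of heterogeneous contagion on a strong-weak network can be sketched as a synchronous update in which each agent's emotion drifts toward the tie-weight-weighted mean of its neighbors' emotions, with strong ties carrying more weight than weak ones. This is a generic discrete-time illustration, not the paper's SW-ECM mean-field equations; the update rule and parameter names are assumptions:

```python
def contagion_step(emotions, neighbors, weights, alpha=0.3):
    # emotions: per-agent emotion level in [0, 1]
    # neighbors: agent index -> list of neighbor indices
    # weights: (i, j) -> tie strength (strong ties > weak ties)
    # alpha: susceptibility, how far each agent moves toward its neighbors
    new = []
    for i, e in enumerate(emotions):
        total_w = sum(weights[(i, j)] for j in neighbors[i])
        if total_w == 0:
            new.append(e)  # isolated agent keeps its emotion
            continue
        influence = sum(weights[(i, j)] * emotions[j]
                        for j in neighbors[i]) / total_w
        new.append((1 - alpha) * e + alpha * influence)
    return new
```

Iterating this step on a small-world graph whose edges carry strong or weak weights reproduces the qualitative effect the paper studies: panic spreads faster along strong ties than weak ones.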
Pub Date: 2024-09-17 | DOI: 10.1016/j.jksuci.2024.102192
Jin Tao , Jianing Wei , Hongjuan Zhou , Fanyi Meng , Yingchun Li , Chenxu Wang , Zhiquan Zhou
Accurate prediction of short-term sea surface wind speed is essential for maritime safety and coastal management. Most conventional studies encounter challenges in analyzing raw wind speed sequences and extracting multiscale features directly from the originally received data, which results in lower efficiency. In this paper, an enhanced hybrid model, based on a novel assembly method for the original received data, a multiscale feature extraction and selection approach, and a predictive network, is proposed for accurate and efficient short-term sea surface wind speed forecasting. Firstly, the received original data, including wind speed, are assembled into correlation matrices to uncover inherent associations over varied time spans. Secondly, a novel Multiscale Wind-speed Feature-Enhanced Convolutional Network (MW-FECN) is designed for efficient and selective multiscale feature extraction, capturing comprehensive characteristics. Thirdly, Random Forest Feature Selection (RF-FS) is employed to pinpoint the characteristics crucial for enhanced wind speed prediction, with higher efficiency than related works. Finally, the proposed hybrid model utilizes a Bidirectional Long Short-Term Memory (BiLSTM) network to achieve accurate wind speed prediction. Experimental data were collected in the Weihai sea area, and a case study consisting of five benchmarks and three ablation models is conducted to assess the proposed hybrid model. Compared with conventional methods, the experimental results illustrate the effectiveness of the proposed hybrid model and demonstrate an effective balance between prediction accuracy and computational time. The proposed hybrid model achieves up to a 28.45% MAE and 27.27% RMSE improvement over existing hybrid models.
{"title":"Enhanced prediction model of short-term sea surface wind speed: A multiscale feature extraction and selection approach coupled with deep learning technique","authors":"Jin Tao , Jianing Wei , Hongjuan Zhou , Fanyi Meng , Yingchun Li , Chenxu Wang , Zhiquan Zhou","doi":"10.1016/j.jksuci.2024.102192","DOIUrl":"10.1016/j.jksuci.2024.102192","url":null,"abstract":"<div><div>Accurate prediction of short-term sea surface wind speed is essential for maritime safety and coastal management. Most conventional studies encounter challenges in analyzing raw wind speed sequences and extracting multiscale features directly from the original received data, which results in lower efficiency. In this paper, an enhanced hybrid model, based on a novel data assembly method for the original received data, a multiscale feature extraction and selection approach, and a predictive network, is proposed for accurate and efficient short-term sea surface wind speed forecasting. Firstly, the received original data, including wind speed, are assembled into correlation matrices in order to uncover inherent associations over varied time spans. Secondly, a novel Multiscale Wind-speed Feature-Enhanced Convolutional Network (MW-FECN) is designed for efficient and selective multiscale feature extraction, capturing comprehensive characteristics. Thirdly, Random Forest Feature Selection (RF-FS) is employed to pinpoint the characteristics crucial for wind speed prediction, with higher efficiency than related works. Finally, the proposed hybrid model uses a Bidirectional Long Short-Term Memory (BiLSTM) network to achieve accurate wind speed prediction. Experimental data are collected in the Weihai sea area, and a case study consisting of five benchmarks and three ablation models is conducted to assess the proposed hybrid model.
Compared with conventional methods, the experimental results illustrate the effectiveness of the proposed hybrid model and demonstrate an effective balance between prediction accuracy and computational time. The proposed hybrid model achieves up to a 28.45% MAE and a 27.27% RMSE improvement over existing hybrid models.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 8","pages":"Article 102192"},"PeriodicalIF":5.2,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142322752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
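The RF-FS stage described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the synthetic dataset, the number of features, and the top-k threshold are all assumptions standing in for the (unpublished) MW-FECN feature output.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for multiscale wind-speed features:
# 500 samples, 12 candidate features.
n_samples, n_features = 500, 12
X = rng.normal(size=(n_samples, n_features))
# Target depends strongly on features 0-2 only; the rest are noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + 0.1 * rng.normal(size=n_samples)

# Random-forest feature selection: rank features by impurity-based
# importance and keep the top-k for the downstream BiLSTM predictor.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
k = 3
top_k = np.argsort(forest.feature_importances_)[::-1][:k]
X_selected = X[:, top_k]
print(sorted(top_k.tolist()))  # the informative features are recovered
```

The selected column subset, rather than all twelve features, would then be windowed into sequences and fed to the recurrent predictor, which is the efficiency gain the abstract claims for this stage.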
Pub Date : 2024-09-16DOI: 10.1016/j.jksuci.2024.102187
Liying Zhao , Chao Liu , Entie Qi , Sinan Shi
Mobile edge processing is a cutting-edge technique that addresses the limitations of mobile devices by enabling users to offload computational tasks to edge servers, rather than relying on distant cloud servers. This approach significantly reduces the latency associated with cloud processing, thereby enhancing the quality of service. In this paper, we propose a system in which a cellular network, comprising multiple users, interacts with both cloud and edge servers to process service requests. The system assumes non-orthogonal multiple access (NOMA) for user access to the radio spectrum. We model the interactions between users and servers using queuing theory, aiming to minimize the total energy consumption of users, service delivery time, and overall network operation costs. The problem is mathematically formulated as a multi-objective, bounded non-convex optimization problem. The successive convex approximation (SCA) method is employed to obtain the global optimal solution. Simulation results demonstrate that the proposed model reduces energy consumption, delay, and network costs by approximately 50%, under the given assumptions.
{"title":"Multi-objective optimization in order to allocate computing and telecommunication resources based on non-orthogonal access, participation of cloud server and edge server in 5G networks","authors":"Liying Zhao , Chao Liu , Entie Qi , Sinan Shi","doi":"10.1016/j.jksuci.2024.102187","DOIUrl":"10.1016/j.jksuci.2024.102187","url":null,"abstract":"<div><div>Mobile edge processing is a cutting-edge technique that addresses the limitations of mobile devices by enabling users to offload computational tasks to edge servers, rather than relying on distant cloud servers. This approach significantly reduces the latency associated with cloud processing, thereby enhancing the quality of service. In this paper, we propose a system in which a cellular network, comprising multiple users, interacts with both cloud and edge servers to process service requests. The system assumes non-orthogonal multiple access (NOMA) for user access to the radio spectrum. We model the interactions between users and servers using queuing theory, aiming to minimize the total energy consumption of users, service delivery time, and overall network operation costs. The problem is mathematically formulated as a multi-objective, bounded non-convex optimization problem. The successive convex approximation (SCA) method is employed to obtain the global optimal solution. 
Simulation results demonstrate that the proposed model reduces energy consumption, delay, and network costs by approximately 50%, under the given assumptions.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 8","pages":"Article 102187"},"PeriodicalIF":5.2,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142319258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
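To make the multi-objective formulation above concrete, here is a toy scalarization for a single user choosing what fraction of a task to offload. All constants (CPU speeds, uplink rate, powers, price) are illustrative assumptions, and the dense grid search stands in for the paper's SCA solver over the full NOMA model.

```python
import numpy as np

# Toy single-user model (illustrative constants, not from the paper):
# a fraction f of the task is offloaded to the edge, 1-f runs locally.
CYCLES = 1e9            # task size in CPU cycles
F_LOCAL = 1e9           # local CPU speed (cycles/s)
F_EDGE = 4e9            # edge CPU speed (cycles/s)
RATE = 2e7              # uplink rate (bits/s)
BITS = 4e6              # task input size (bits)
P_TX, P_CPU = 0.5, 1.0  # transmit / local-compute power (W)
PRICE = 0.2             # edge usage cost per fully offloaded task

def objectives(f):
    """Energy, delay, and monetary cost as functions of offload fraction f."""
    t_local = (1 - f) * CYCLES / F_LOCAL
    t_tx = f * BITS / RATE
    t_edge = f * CYCLES / F_EDGE
    delay = max(t_local, t_tx + t_edge)  # local and edge parts run in parallel
    energy = P_CPU * t_local + P_TX * t_tx
    cost = PRICE * f
    return energy, delay, cost

# Weighted-sum scalarization of the three objectives, minimized by grid search.
weights = (1.0, 1.0, 1.0)
def scalar(f):
    return sum(w * o for w, o in zip(weights, objectives(f)))

grid = np.linspace(0.0, 1.0, 1001)
best_f = min(grid, key=scalar)
```

With these constants the fast edge makes full offloading optimal; changing the weights or the uplink rate shifts the trade-off, which is exactly what the multi-objective formulation is meant to expose.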
Navigating through a tactile paved footpath surrounded by static and dynamic obstacles of various sizes is one of the biggest impediments visually impaired people face, especially in Dhaka, Bangladesh. This problem is important to address, considering the number of accidents in such densely populated footpaths. We propose a novel deep-edge solution using computer vision to make people aware of the obstacles in the vicinity and reduce the need for a walking cane. This study introduces a diverse novel tactile footpath dataset of Dhaka covering different city areas. Additionally, existing state-of-the-art deep neural networks for object detection have been fine-tuned and investigated using this dataset. A heuristic-based breadth-first navigation algorithm (HBFN) is developed to provide navigation directions that are safe and obstacle-free, which is then deployed in a smartphone application that automatically captures images of the footpath ahead to provide real-time navigation guidance delivered by speech. The findings from this study demonstrate the effectiveness of the object detection model, YOLOv8s, which outperformed other benchmark models on this dataset, achieving a high mAP of 0.974 and an F1 score of 0.934. The model's performance is analyzed after quantization, reducing its size by 49.53% while retaining 98.97% of the original mAP.
{"title":"A novel edge intelligence-based solution for safer footpath navigation of visually impaired using computer vision","authors":"Rashik Iram Chowdhury, Jareen Anjom, Md. Ishan Arefin Hossain","doi":"10.1016/j.jksuci.2024.102191","DOIUrl":"10.1016/j.jksuci.2024.102191","url":null,"abstract":"<div><p>Navigating through a tactile paved footpath surrounded by various sizes of static and dynamic obstacles is one of the biggest impediments visually impaired people face, especially in Dhaka, Bangladesh. This problem is important to address, considering the number of accidents in such densely populated footpaths. We propose a novel deep-edge solution using Computer Vision to make people aware of the obstacles in the vicinity and reduce the necessity of a walking cane. This study introduces a diverse novel tactile footpath dataset of Dhaka covering different city areas. Additionally, existing state-of-the-art deep neural networks for object detection have been fine-tuned and investigated using this dataset. A heuristic-based breadth-first navigation algorithm (HBFN) is developed to provide navigation directions that are safe and obstacle-free, which is then deployed in a smartphone application that automatically captures images of the footpath ahead to provide real-time navigation guidance delivered by speech. The findings from this study demonstrate the effectiveness of the object detection model, YOLOv8s, which outperformed other benchmark models on this dataset, achieving a high mAP of 0.974 and an F1 score of 0.934. 
The model’s performance is analyzed after quantization, reducing its size by 49.53% while retaining 98.97% of the original mAP.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 8","pages":"Article 102191"},"PeriodicalIF":5.2,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002805/pdfft?md5=67af390c0280c8b6ae2c05684fbae69f&pid=1-s2.0-S1319157824002805-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142272843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
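The breadth-first navigation idea in the abstract above can be sketched as a plain BFS over an occupancy grid built from detections. The grid, the start/goal cells, and the direction vocabulary are illustrative assumptions; the paper's actual HBFN heuristics are not reproduced here.

```python
from collections import deque

# Toy occupancy grid built from obstacle detections (1 = obstacle),
# an assumed stand-in for the paper's footpath representation.
GRID = [
    [0, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
]

def bfs_first_step(grid, start, goal):
    """Breadth-first search; returns the first move of a shortest
    obstacle-free path as a spoken direction, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    moves = {(-1, 0): "forward", (1, 0): "back",
             (0, -1): "left", (0, 1): "right"}
    queue = deque([(start, None)])   # (cell, first move taken to reach it)
    seen = {start}
    while queue:
        (r, c), first = queue.popleft()
        if (r, c) == goal:
            return first
        for (dr, dc), word in moves.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and \
               grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), first or word))
    return None

step = bfs_first_step(GRID, start=(3, 2), goal=(0, 0))
print(step)  # "forward": the shortest safe route starts straight ahead
```

In a deployment like the one described, only the first move of the shortest path matters per frame, since the grid is rebuilt from fresh detections before the next spoken instruction.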