Preference based multi-issue negotiation algorithm (PMINA) for fog resource allocation
Pub Date: 2024-03-09 | DOI: 10.1007/s00607-024-01271-4
Shaifali Malukani, C. K. Bhensdadia
Fog computing has emerged as a decentralized computing paradigm that extends cloud services to the network edge, enabling faster data processing and real-time applications. The increasing popularity of fog computing has led to the emergence of a potential market involving users and providers of fog resources. However, both parties are driven by self-interest and seek to maximize their utility, giving rise to multiple conflicts extending beyond mere price considerations. Negotiations can play a crucial role in resolving conflicts and establishing mutually beneficial service level agreements. Moreover, in the heterogeneous fog environment, quality-of-service attributes such as throughput, delay, trust, and power dissipation vary significantly among different user-fog associations. These attributes, although non-negotiable, hold great importance for entities and directly influence partner selection. Entities may exhibit a preference for one another based on these non-negotiable attributes. To the best of our knowledge, no existing literature specifically addresses the issue of associating with a preferred trading partner at a negotiated value for multiple issues in the fog environment. This research aims to address this gap and provide insights into this unexplored area. This work presents a novel Preference-based Multi-Issue Negotiation Algorithm, PMINA, for many-to-many, bilateral, and concurrent negotiations in the fog environment. The results confirm the significance of PMINA, demonstrating a substantial enhancement in user and fog utilities.
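The abstract does not spell out the negotiation protocol; as a rough, hedged illustration of the kind of additive utility and time-dependent concession commonly used in multi-issue negotiation (issue names, weights, and the concession exponent below are hypothetical, not taken from the paper):

```python
# Illustrative sketch only: linear additive utility over negotiable issues and a
# time-dependent acceptance threshold. All issue names and parameters are hypothetical.

def utility(offer, weights, value_funcs):
    """Weighted additive utility of a multi-issue offer, assuming each value function maps to [0, 1]."""
    return sum(weights[k] * value_funcs[k](v) for k, v in offer.items())

def acceptance_threshold(t, deadline, u_max=1.0, u_min=0.4, beta=0.5):
    """Concede from u_max toward u_min as the negotiation deadline approaches; beta sets the pace."""
    return u_max - (u_max - u_min) * (t / deadline) ** (1.0 / beta)

# Example: a user evaluating a fog provider's offer on (already normalised) price and completion time.
offer = {"price": 0.7, "completion_time": 0.8}
weights = {"price": 0.6, "completion_time": 0.4}
value_funcs = {"price": lambda v: v, "completion_time": lambda v: v}
accept = utility(offer, weights, value_funcs) >= acceptance_threshold(t=3, deadline=10)
```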
{"title":"Preference based multi-issue negotiation algorithm (PMINA) for fog resource allocation","authors":"Shaifali Malukani, C. K. Bhensdadia","doi":"10.1007/s00607-024-01271-4","DOIUrl":"https://doi.org/10.1007/s00607-024-01271-4","url":null,"abstract":"<p>Fog computing has emerged as a decentralized computing paradigm that extends cloud services to the network edge, enabling faster data processing and real-time applications. The increasing popularity of fog computing has led to the emergence of a potential market involving users and providers of fog resources. However, both parties are driven by self-interest and seek to maximize their utility, giving rise to multiple conflicts extending beyond mere price considerations. Negotiations can play a crucial role in resolving conflicts and establishing mutually beneficial service level agreements. Moreover, in the heterogeneous fog environment, quality of service attributes, such as throughput, delay, trust, power dissipation, etc., vary significantly among different user-fog associations. These attributes, although non-negotiable, hold great importance for entities and directly influence partner selection. Entities may exhibit a preference for one another based on these non-negotiable attributes. To the best of our knowledge, no existing literature specifically addresses the issue of associating with a preferred trading partner at a negotiated value for multiple issues in the fog environment. This research aims to address this gap and provide insights into this unexplored area. This work presents a novel Preference-based Muti-Issue Negotiation Algorithm, PMINA, for many to many, bilateral and concurrent negotiations in the fog environment. The results confirm the significance of PMINA, demonstrating a substantial enhancement in user and fog utilities.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"21 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140097241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time-sensitive propagation values discount centrality measure
Pub Date: 2024-03-04 | DOI: 10.1007/s00607-024-01265-2
The detection of influential individuals in social networks is called influence maximization, which has many applications in advertising and marketing. Several factors, including propagation delay, affect the degree to which an individual influences the network. Many different methods, including centrality measures, identify high-influence individuals in social networks. The time-sensitive harmonic method (TSHarmonic), which accounts for time sensitivity to propagation delay and duration, is one such centrality measure. TSHarmonic has two weaknesses: high computational complexity and disregard for the influence of already selected nodes when selecting further influential nodes. Therefore, in this article, the path-finding process of the TSHarmonic method is modified to produce the Fast Time-Sensitive Harmonic algorithm, which matches the accuracy of TSHarmonic while running significantly faster. Then, the Time-Sensitive Propagation Values Discount method is proposed to further improve detection speed and accuracy; it takes into account the influence of already selected nodes in future selections and hence increases accuracy.
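For context, plain harmonic centrality of a node is the sum of reciprocal shortest-path distances to all other nodes; the TSHarmonic and Fast Time-Sensitive Harmonic variants described above additionally weight paths by propagation delay and duration, which this minimal unweighted sketch does not attempt to reproduce:

```python
# Minimal sketch of classic harmonic centrality on an unweighted graph (BFS distances).
from collections import deque

def harmonic_centrality(adj):
    """adj: dict node -> iterable of neighbours. Returns dict node -> sum of 1/d(node, other)."""
    scores = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # breadth-first search from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        scores[src] = sum(1.0 / d for node, d in dist.items() if node != src)
    return scores

print(harmonic_centrality({"a": ["b"], "b": ["a", "c"], "c": ["b"]}))
```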
{"title":"Time-sensitive propagation values discount centrality measure","authors":"","doi":"10.1007/s00607-024-01265-2","DOIUrl":"https://doi.org/10.1007/s00607-024-01265-2","url":null,"abstract":"<h3>Abstract</h3> <p>The detection of influential individuals in social networks is called influence maximization which has many applications in advertising and marketing. Several factors including propagation delay affect the degree to which an individual influences the network. Many different methods, including centrality measures, identify high-influence individuals in social networks. The time-sensitive harmonic method (TSHarmonic), which considers time sensitivity to propagation delay and duration, is a centrality measure. TSHarmonic has two weaknesses: high computational complexity and ignoring the influence of the selected node in selecting other influential nodes. Therefore, in this article, the valuable path-finding process in the TSHarmonic method is modified to provide the Fast Time-Sensitive Harmonic algorithm. The provided method has the same accuracy as the TSHarmonic, while the speed is significantly increased. Then, the Time-Sensitive Propagation Values Discount method is proposed to improve detection speed and accuracy. This method takes into account the influence of the selected node for future selection and hence increases the accuracy.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"89 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140037128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud data center cost management using virtual machine consolidation with an improved artificial feeding birds algorithm
Pub Date: 2024-03-02 | DOI: 10.1007/s00607-024-01267-0
Mohammad Ali Monshizadeh Naeen, Hamid Reza Ghaffari, Hossein Monshizadeh Naeen
Cloud data centers face various challenges, such as high energy consumption, environmental impact, and quality of service (QoS) requirements. Dynamic virtual machine (VM) consolidation is an effective approach to address these challenges, but it is a complex optimization problem that involves trade-offs between energy efficiency and QoS satisfaction. Moreover, the workload patterns in cloud data centers are often non-stationary and unpredictable, which makes them difficult to model. In this paper, we propose a new method for dynamic VM consolidation that optimizes both energy efficiency and QoS objectives. Our approach is based on Markov chains and the artificial feeding birds (AFB) algorithm. Markov chains are used to model the resource utilization of each individual VM and physical machine (PM) based on changes in the workload data. The AFB algorithm is a metaheuristic optimization technique that mimics the behavior of birds in nature; we modify it to suit the characteristics of the VM placement problem and to provide QoS-aware, energy-efficient solutions. Our approach also employs an online step-detection method to capture variations in workload patterns. Furthermore, we introduce a new policy for selecting VMs from overloaded hosts that considers abrupt changes in the VMs' utilization processes. The proposed algorithms are evaluated extensively using the CloudSim Toolkit with real workload data, and the proposed system outperforms the baseline policies on multiple metrics, including energy consumption and SLA violations.
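As one small, hedged illustration of the Markov-chain component only (the AFB placement search and the step-detection method are not shown), the sketch below estimates a discrete transition matrix from a VM's CPU-utilization history; the state boundaries and sample history are illustrative assumptions:

```python
# Hedged sketch: estimate a discrete Markov transition matrix from utilization samples.
import numpy as np

def transition_matrix(utilization, bins=(0.0, 0.3, 0.7, 1.01)):
    states = np.digitize(utilization, bins) - 1        # map each sample to a state index
    n = len(bins) - 1
    counts = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):          # count observed state transitions
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1                        # avoid division by zero for unseen states
    return counts / row_sums

history = [0.20, 0.25, 0.50, 0.80, 0.75, 0.40, 0.35]   # hypothetical CPU-utilization trace
P = transition_matrix(history)
print(P)                                               # row i gives Pr(next state | current state i)
```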
{"title":"Cloud data center cost management using virtual machine consolidation with an improved artificial feeding birds algorithm","authors":"Mohammad Ali Monshizadeh Naeen, Hamid Reza Ghaffari, Hossein Monshizadeh Naeen","doi":"10.1007/s00607-024-01267-0","DOIUrl":"https://doi.org/10.1007/s00607-024-01267-0","url":null,"abstract":"<p>Cloud data centers face various challenges, such as high energy consumption, environmental impact, and quality of service (QoS) requirements. Dynamic virtual machine (VM) consolidation is an effective approach to address these challenges, but it is a complex optimization problem that involves trade-offs between energy efficiency and QoS satisfaction. Moreover, the workload patterns in cloud data centers are often non-stationary and unpredictable, which makes it difficult to model them. In this paper, we propose a new method for dynamic VM consolidation that optimizes both energy efficiency and QoS objectives. Our approach is based on Markov chains and the artificial feeding birds (AFB) algorithm. Markov chains are used to model the resource utilization of each individual VM and PM based on the changes that happen in workload data. AFB algorithm is a metaheuristic optimization technique that mimics the behavior of birds in nature. We modify the AFB algorithm to suit the characteristics of the VM placement problem and to provide QoS-aware and energy-efficient solutions. Our approach also employs an online step detection method to capture variations in workload patterns. Furthermore, we introduce a new policy for VM selection from overloaded hosts, which considers the abrupt changes in the utilization processes of the VMs. The proposed algorithms are evaluated extensively using the CloudSim Toolkit with real workload data. The proposed system outperforms evaluation policies in multiple metrics, including energy consumption, SLA violations, and other essential metrics.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"31 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140018213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many-BSP: an analytical performance model for CUDA kernels
Pub Date: 2024-02-26 | DOI: 10.1007/s00607-023-01255-w
Ali Riahi, Abdorreza Savadi, Mahmoud Naghibzadeh
The unknown behavior of GPUs and the differing characteristics among their generations present a serious challenge for the analysis and optimization of programs on these processors. As a result, performance models have been developed to better analyze and describe the behavior of these processors; such models help programmers configure applications and help developers improve device performance. This paper introduces an analytical model, called Many-BSP, to predict the execution time of a CUDA kernel. The model is highly portable and can easily be applied to various devices. The GPU features and behaviors that affect performance and are covered by the model include multi-threading, coalesced access to global memory, shared-memory bank conflicts, dual-issue instructions, functional-unit limitations, parallelism at the instruction, thread, and warp levels, the instruction pipeline, branch divergence, and intra-block and inter-block overlapping of communication and computation. The model also employs the tree hierarchy and parameters of the Multi-BSP model to estimate the latency of communication with memory. In Many-BSP, the execution time of a kernel is predicted by static analysis of the CUDA and PTX code. The model is evaluated on three devices of different generations and three real-world benchmarks; the results show that the execution time of a CUDA kernel can be predicted with a maximum error of 12.33%.
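The abstract does not give the Many-BSP equations; purely as a generic illustration of how an analytical model can overlap a compute component and a latency-hidden memory component into a kernel-time estimate (every parameter below is a hypothetical placeholder, not the paper's model):

```python
# Purely illustrative, NOT the Many-BSP model: a toy cycle estimate in the general
# spirit of analytical GPU performance models.
def toy_kernel_time_cycles(n_instructions, cycles_per_instruction,
                           n_memory_transactions, memory_latency_cycles,
                           active_warps_per_sm):
    compute = n_instructions * cycles_per_instruction
    # memory latency is partially hidden by switching among active warps
    memory = n_memory_transactions * memory_latency_cycles / max(active_warps_per_sm, 1)
    return max(compute, memory)            # assume compute and memory traffic overlap

print(toy_kernel_time_cycles(1_000_000, 1.2, 50_000, 400, 32))
```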
{"title":"Many-BSP: an analytical performance model for CUDA kernels","authors":"Ali Riahi, Abdorreza Savadi, Mahmoud Naghibzadeh","doi":"10.1007/s00607-023-01255-w","DOIUrl":"https://doi.org/10.1007/s00607-023-01255-w","url":null,"abstract":"<p>The unknown behavior of GPUs and the differing characteristics among their generations present a serious challenge in the analysis and optimization of programs in these processors. As a result, performance models have been developed to better analyze and describe the behavior of these processors. These models help programmers to configure applications and developers to improve the performance of these devices. This paper introduces an analytical model, called Many-BSP, to predict the execution time of a CUDA kernel. This model has high portability and can easily be used on various devices. There are many GPU features and behaviors that affect performance and will be discussed, including multi-threading, coalesced access to global memory, shared memory bank conflict, dual-issue instructions, limitation of functional units, parallelism in instruction, thread and warp levels, the instruction pipeline, branch divergence, and intra-block and inter-block overlapping between communications and computations. This model also employs the tree hierarchy and parameters of the Multi-BSP model to estimate the communication latency with memory. In Many-BSP, the execution time of a kernel is predicted by static analysis of CUDA and PTX codes. The performance of the model is tested on three devices of different generations and three real-world benchmarks. The results show that the execution time of a CUDA kernel can be predicted with a maximum error of 12.33%.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"213 3-4 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139967963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electricity-cost-aware multi-workflow scheduling in heterogeneous cloud
Pub Date: 2024-02-24 | DOI: 10.1007/s00607-024-01264-3
Shuang Wang, Yibing Duan, Yamin Lei, Peng Du, Yamin Wang
Multi-workflows are commonly deployed on cloud platforms to obtain efficient computational power. Diverse task configuration requirements, together with the heterogeneity and dynamic electricity prices of cloud servers, pose significant challenges for scheduling multi-workflows economically. In this paper, we propose a Heuristic Electricity-cost-aware Multi-workflow Scheduling algorithm (HEMS) that searches for an optimal scheduling plan: for each task in each workflow, it determines the server that executes the task, the resources allotted to it, and the time slot in which it runs. The objective is to minimize the total electricity cost of all servers while satisfying the deadline constraints of all workflows. The HEMS algorithm consists of five components: Workflow Scheduling Sequence Generation, Task Scheduling Sequence Initialization for each workflow, Optimal Scheduling Scheme Determination for each task, initial Task Scheduling Sequence Optimization, and Optimal Scheduling Plan Optimization. Experimental results demonstrate that, compared to three existing scheduling approaches, HEMS consistently achieves the scheduling plan with the lowest total electricity cost (saving 54.5–69.1% on average) across various multi-workflows, at the cost of slightly longer CPU time.
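As a minimal sketch of the objective being minimized (not the HEMS search itself), the snippet below totals the electricity cost of a candidate schedule under hourly prices; the power figure, tariff, and schedule format are illustrative assumptions:

```python
# Hedged sketch: total electricity cost of a candidate schedule under time-varying prices.
# Server power draw (watts) and the (server, start_hour, end_hour) format are assumptions.
def electricity_cost(schedule, hourly_price_per_kwh, busy_watts=250.0):
    cost = 0.0
    for _server, start_hour, end_hour in schedule:
        for h in range(start_hour, end_hour):
            energy_kwh = busy_watts / 1000.0                      # one busy hour
            cost += energy_kwh * hourly_price_per_kwh[h % len(hourly_price_per_kwh)]
    return cost

prices = [0.08] * 8 + [0.15] * 10 + [0.10] * 6                    # hypothetical 24-hour tariff
print(electricity_cost([("s1", 2, 6), ("s2", 9, 12)], prices))
```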
{"title":"Electricity-cost-aware multi-workflow scheduling in heterogeneous cloud","authors":"Shuang Wang, Yibing Duan, Yamin Lei, Peng Du, Yamin Wang","doi":"10.1007/s00607-024-01264-3","DOIUrl":"https://doi.org/10.1007/s00607-024-01264-3","url":null,"abstract":"<p>Multi-workflows are commonly deployed on cloud platforms to achieve efficient computational power. Diverse task configuration requirements, the heterogeneous nature and dynamic electricity price of cloud servers impose significant challenges for economically scheduling multi-workflows. In this paper, we propose a Heuristic Electricity-cost-aware Multi-workflow Scheduling algorithm (HEMS) to search for an optimal scheduling plan which determines the optimal scheduling scheme for each task in each workflow, specifying the server to perform the task with determined resources in specific time. The objective is to minimize the total electricity cost of all servers while satisfying the deadline constraints of all workflows. The HEMS algorithm consists of five components: Workflow Scheduling Sequence Generation, Task Scheduling Sequence Initialization for each workflow, Optimal Scheduling Scheme Determination for each task, initial Task Scheduling Sequence Optimization, and Optimal Scheduling Plan Optimization. Experimental results demonstrate that HEMS consistently achieves the optimal scheduling plan with the lower total electricity cost (saving 54.5–69.1% on average) within slightly longer CPU time for various multi-workflows compared to existing three scheduling approaches.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"242 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139955742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-objective crow search algorithm for optimizing makespan and costs in scientific cloud workflows (CSAMOMC)
Pub Date: 2024-02-24 | DOI: 10.1007/s00607-024-01263-4
Reza Akraminejad, Navid Khaledian, Amin Nazari, Marcus Voelp
With the rapid expansion of cloud computing technology for processing Internet of Things (IoT) workloads, the demand for data centers has increased significantly, leading to a surge in CO2 emissions, power consumption, and global warming. As a result, extensive research and initiatives have been undertaken to tackle this problem. Two specific approaches focus on enhancing workload scheduling, an NP-hard optimization problem, and on integrating scheduling into scientific workflows. In this work, we present a multi-objective Crow Search Algorithm (CSA) for optimizing both makespan and cost in scientific cloud workflows (CSAMOMC). We conduct a comparative analysis between our approach and the well-known HEFT and TC3pop algorithms, which are commonly used for reducing makespan and optimizing costs. Our findings demonstrate that CSAMOMC achieves an average makespan reduction of 4.42% and a cost reduction of 4.77% compared to these algorithms.
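The paper builds on the Crow Search Algorithm; the sketch below shows the standard single-objective CSA position update, not CSAMOMC's multi-objective handling of makespan and cost, and the flight length and awareness probability values are illustrative:

```python
# Hedged sketch of the standard Crow Search Algorithm position-update step.
import random

def csa_step(positions, memory, lb, ub, fl=2.0, ap=0.1):
    """positions/memory: lists of coordinate lists; lb/ub: per-dimension bounds."""
    n, dim = len(positions), len(lb)
    new_positions = []
    for i in range(n):
        j = random.randrange(n)                       # crow i picks a random crow j to follow
        if random.random() >= ap:                     # crow j is unaware of being followed
            new = [positions[i][d] + random.random() * fl * (memory[j][d] - positions[i][d])
                   for d in range(dim)]
        else:                                         # crow j fools the follower: random move
            new = [random.uniform(lb[d], ub[d]) for d in range(dim)]
        new_positions.append([min(max(x, lb[d]), ub[d]) for d, x in enumerate(new)])
    return new_positions
```

After each step, a crow's memory is updated whenever its new position improves its objective value.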
{"title":"A multi-objective crow search algorithm for optimizing makespan and costs in scientific cloud workflows (CSAMOMC)","authors":"Reza Akraminejad, Navid Khaledian, Amin Nazari, Marcus Voelp","doi":"10.1007/s00607-024-01263-4","DOIUrl":"https://doi.org/10.1007/s00607-024-01263-4","url":null,"abstract":"<p>Nowadays, with the rapid expansion of cloud computing technology in processing Internet of Things (IoT) workloads, the demand for data centers has significantly increased, leading to a surge in CO<sub>2</sub> emissions, power consumption, and global warming. As a result, extensive research and initiatives have been undertaken to tackle this problem. Two specific approaches focus on enhancing workload scheduling, a complex problem known as NP-Hard, and integrating scheduling into scientific workflows. In this investigation, we present a multi-objective Crow Search Algorithm (CSA) for optimizing both makespan and costs in scientific cloud workflows (CSAMOMC). We conduct a comparative analysis between our approach and the well-known HEFT and TC3pop algorithms, which are commonly used for reducing makespan and optimizing costs. Our findings demonstrate that CSAMOMC is capable of achieving an average makespan reduction of 4.42% and a cost reduction of 4.77% when compared to the aforementioned algorithms.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"11 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139955748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing sine cosine algorithm based on social learning and elite opposition-based learning
Pub Date: 2024-02-24 | DOI: 10.1007/s00607-024-01256-3
Lei Chen, Linyun Ma, Lvjie Li
The Sine Cosine Algorithm (SCA) is a metaheuristic optimization algorithm with a simple structure, few parameters, and an update rule based on trigonometric functions, and it has proven competitive with existing optimization algorithms. However, SCA's single search mechanism underuses the information of the whole population, limits its ability to escape local optima, and leads to poor performance on complex objective functions. Therefore, this paper introduces a social learning (SL) strategy and an elite opposition-based learning (EOBL) strategy to improve SCA, and proposes a novel algorithm: the enhanced Sine Cosine Algorithm based on elite opposition-based learning and social learning (ESLSCA). The social learning strategy takes full advantage of information from the entire population, while the elite opposition-based learning strategy gives the algorithm a chance to escape local optima and increases population diversity. To evaluate ESLSCA, this paper uses 22 well-known benchmark test functions and the CEC2019 test function set. The comparisons show that the proposed ESLSCA outperforms the standard SCA and is highly competitive with other state-of-the-art optimization algorithms.
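For reference, the core SCA position update and a simple elite opposition-based candidate are sketched below; how ESLSCA integrates these with social learning is not reproduced, and the parameter choices are illustrative:

```python
# Hedged sketch: standard SCA update rule plus a generic elite opposition-based candidate.
import math
import random

def sca_update(x, best, t, max_iter, a=2.0):
    """One SCA move of solution x toward the best-so-far position."""
    r1 = a - t * a / max_iter                          # shrinks exploration over iterations
    out = []
    for d in range(len(x)):
        r2 = random.uniform(0.0, 2.0 * math.pi)
        r3 = 2.0 * random.random()
        r4 = random.random()
        trig = math.sin(r2) if r4 < 0.5 else math.cos(r2)
        out.append(x[d] + r1 * trig * abs(r3 * best[d] - x[d]))
    return out

def elite_opposition(x, elite_lb, elite_ub):
    """Opposite candidate within the dynamic bounds of the elite population."""
    k = random.random()
    return [k * (elite_lb[d] + elite_ub[d]) - x[d] for d in range(len(x))]
```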
{"title":"Enhancing sine cosine algorithm based on social learning and elite opposition-based learning","authors":"Lei Chen, Linyun Ma, Lvjie Li","doi":"10.1007/s00607-024-01256-3","DOIUrl":"https://doi.org/10.1007/s00607-024-01256-3","url":null,"abstract":"<p>In recent years, Sine Cosine Algorithm (SCA) is a kind of meta-heuristic optimization algorithm with simple structure, simple parameters and trigonometric function principle. It has been proved that it has good competitiveness among the existing optimization algorithms. However, the single mechanism of SCA leads to its insufficient utilization of the information of the whole population, insufficient ability to jump out of local optima and poor performance at solving complex objective function. Therefore, this paper introduces social learning strategy (SL) and elite opposition-based learning (EOBL) strategy to improve SCA, and proposes novel algorithm: enhancing Sine Cosine Algorithm based on elite opposition-based learning and social learning (ESLSCA). Social learning strategy takes full advantage of information from the entire population. The elite opposition-based learning strategy provides a possibility for the algorithm to jump out of local optima and increases the diversity of the population. To demonstrate the performance of ESLSCA, this paper uses 22 well-known benchmark test functions and CEC2019 test function set to evaluate ESLSCA. The comparisons show that the proposed ESLSCA has better performance than the standard SCA and it is very competitive among other excellent optimization algorithms.\u0000</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"45 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139956006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SVFLDetector: a decentralized client detection method for Byzantine problem in vertical federated learning
Pub Date: 2024-02-21 | DOI: 10.1007/s00607-024-01262-5
Jiuyun Xu, Yinyue Jiang, Hanfei Fan, Qiqi Wang
In recent years, with the deepening of cross-industry cooperation, vertical federated learning, in which parties share many overlapping samples but few overlapping features, has attracted extensive attention. In contrast to horizontal federated learning, vertical federated learning makes detecting Byzantine clients more challenging because of feature heterogeneity. Existing methods for detecting Byzantine clients can be divided into statistical-based and detection-based types, and the detection-based type removes the limit on the number of Byzantine clients. To our knowledge, current research in vertical federated learning relies on the assumption of a reliable third-party coordinator and uses statistical-based methods. In this work, we propose a detection-based framework called SVFLDetector to detect Byzantine clients in vertical federated learning. The key ideas of SVFLDetector are: (1) we combine decentralized vertical federated learning with split learning, exploiting their respective advantages and eliminating the impact of a third-party server; (2) to handle the feature heterogeneity of vertical federated learning, we detect clients by grouping them through feature encoding and performing cross-validation within groups to identify Byzantine clients; (3) we propose a penalty function to reduce the impact of Byzantine clients on model aggregation. Numerical experiments show that our method is strongly robust against various Byzantine attacks.
{"title":"SVFLDetector: a decentralized client detection method for Byzantine problem in vertical federated learning","authors":"Jiuyun Xu, Yinyue Jiang, Hanfei Fan, Qiqi Wang","doi":"10.1007/s00607-024-01262-5","DOIUrl":"https://doi.org/10.1007/s00607-024-01262-5","url":null,"abstract":"<p>In recent years, with the deepening of cross-industry cooperation, vertical federated learning with multiple overlapping samples and fewer overlapping features has attracted extensive attention. Vertical federated learning increases the challenge of detecting Byzantine clients due to feature heterogeneity, in contrast to horizontal federated learning. Existing methods for detecting Byzantine clients can be divided into statistical-based and detection-based types. The detection-based type breaks the limit on the number of Byzantine clients. To our knowledge, current research in vertical federated learning relies on the assumption of a reliable third-party coordinator and is based on statistical type. In this work, we propose a framework based on a detection type called SVFLDetector to detect Byzantine clients in vertical federated learning. The key ideas of SVFLDetector are: (1) we combine decentralized vertical federated learning with split learning, utilizing their respective advantages and eliminating the impact of a third-party server; (2) according to the heterogeneity of features in vertical federated learning, we use a client detection method which is achieved by grouping through feature encoding and performing cross validation within groups to identify Byzantine clients; (3) we propose a penalty function to reduce the impact of Byzantine clients on model aggregation. Numerical experiments show that our method has strong robustness against various Byzantine attacks.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"2 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139917658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparative study of LSTM-ED architectures in forecasting day-ahead solar photovoltaic energy using Weather Data
Pub Date: 2024-02-20 | DOI: 10.1007/s00607-024-01266-1
Ekin Ekinci
Solar photovoltaic (PV) energy, with its clean, local, and renewable features, is an effective complement to traditional energy sources today. However, photovoltaic power systems are highly weather-dependent and therefore unstable and intermittent. Despite the negative impact of these characteristics, the increase in worldwide installed PV capacity has made solar energy prediction an important research topic. This study compares three encoder-decoder (ED) networks for day-ahead solar PV energy prediction: Long Short-Term Memory ED (LSTM-ED), Convolutional LSTM ED (Conv-LSTM-ED), and Convolutional Neural Network and LSTM ED (CNN-LSTM-ED). The models are tested on 1741-day datasets from 26 PV panels in Istanbul, Turkey, considering both the power and energy output of the panels and meteorological features. The results show that the Conv-LSTM-ED with 50 iterations is the most successful model, achieving an average R-squared (R²) prediction score of up to 0.88. Evaluating the effect of the iteration count reveals that the Conv-LSTM-ED with 50 iterations also yields the lowest Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) values, confirming its success. In addition, the fitness and effectiveness of the models are evaluated, with the Conv-LSTM-ED achieving the lowest Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) values for each iteration count. These findings can help researchers build effective data-driven methods for forecasting PV solar energy from PV and meteorological features.
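As one hedged sketch of the plain LSTM encoder-decoder variant only (layer sizes, horizons, and feature count are illustrative; the Conv-LSTM-ED and CNN-LSTM-ED variants and the paper's exact configuration are not reproduced), using the Keras API:

```python
# Minimal LSTM encoder-decoder for sequence-to-sequence forecasting (illustrative sizes).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense

n_in, n_out, n_features = 24, 24, 6                    # hypothetical horizons and feature count

model = Sequential([
    Input(shape=(n_in, n_features)),
    LSTM(64),                                          # encoder: summarise the input window
    RepeatVector(n_out),                               # repeat the context for each output step
    LSTM(64, return_sequences=True),                   # decoder: unroll over the forecast horizon
    TimeDistributed(Dense(1)),                         # one energy value per forecast step
])
model.compile(optimizer="adam", loss="mse")            # train with model.fit(X, y, epochs=...)
```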
{"title":"A comparative study of LSTM-ED architectures in forecasting day-ahead solar photovoltaic energy using Weather Data","authors":"Ekin Ekinci","doi":"10.1007/s00607-024-01266-1","DOIUrl":"https://doi.org/10.1007/s00607-024-01266-1","url":null,"abstract":"<p>Solar photovoltaic (PV) energy, with its clean, local, and renewable features, is an effective complement to traditional energy sources today. However, the photovoltaic power system is highly weather-dependent and therefore has unstable and intermittent characteristics. Despite the negative impact of these features on solar sources, the increase in worldwide installed PV capacity has made solar energy prediction an important research topic. This study compares three encoder-decoder (ED) networks for day-ahead solar PV energy prediction: Long Short-Term Memory ED (LSTM-ED), Convolutional LSTM ED (Conv-LSTM-ED), and Convolutional Neural Network and LSTM ED (CNN-LSTM-ED). The models are tested using 1741-day-long datasets from 26 PV panels in Istanbul, Turkey, considering both power and energy output of the panels and meteorological features. The results show that the Conv-LSTM-ED with 50 iterations is the most successful model, achieving an average prediction score of up to 0.88 over R-square (R<sup>2</sup>). Evaluation of the iteration counts’ effect reveals that the Conv-LSTM-ED with 50 iterations also yields the lowest Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) values, confirming its success. In addition, the fitness and effectiveness of the models are evaluated, with the Conv-LSTM-ED achieving the lowest Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) values for each iteration. The findings of this work can help researchers build the best data-driven methods for forecasting PV solar energy based on PV features and meteorological features.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"38 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139917659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient processing of all neighboring object group queries with budget range constraint in road networks
Pub Date: 2024-02-16 | DOI: 10.1007/s00607-024-01260-7
Yuan-Ko Huang, Chien-Pang Lee
We present a new type of location-based query, namely the Budget Range-based All Neighboring Object Group Query (BR-ANOGQ for short), to offer spatial object information while respecting distance and budget range constraints. This query type finds utility in numerous practical scenarios, such as assisting travelers in selecting fitting destinations for their journeys. To support the BR-ANOGQ, we develop data structures for efficient representation of road networks and employ two index structures, the $R^{cC}$-tree and the grid index, for managing spatial objects based on their locations and costs. We introduce two pruning criteria to filter out object sets that do not meet the specified distance $d$ and budget range $[bgt_m, bgt_M]$ constraints. We also devise a road network traversal method that selectively accesses a small fraction of objects while generating the query result. The BR-ANOGQ algorithm effectively utilizes index structures and pruning criteria for query processing. Through a series of comprehensive experiments, we demonstrate its efficiency in terms of CPU time and index node accesses, providing valuable insights for location-based queries with constraints.
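The index structures and traversal are not reproduced here; as a naive, hedged sketch of the two constraints themselves (field names and the brute-force group enumeration are hypothetical, and the paper's pruning avoids this exhaustive search):

```python
# Naive illustration of the distance and budget-range constraints only; the paper's
# R^{cC}-tree, grid index, pruning criteria, and network traversal are far more efficient.
from itertools import chain, combinations

def all_groups(items):
    """Enumerate every non-empty subset of items (exponential; for illustration only)."""
    return chain.from_iterable(combinations(items, r) for r in range(1, len(items) + 1))

def br_anogq_bruteforce(objects, network_dist, query_loc, d, bgt_min, bgt_max):
    """objects: list of dicts with hypothetical 'loc' and 'cost' fields."""
    nearby = [o for o in objects if network_dist(query_loc, o["loc"]) <= d]
    result = []
    for group in all_groups(nearby):
        total_cost = sum(o["cost"] for o in group)
        if bgt_min <= total_cost <= bgt_max:          # keep groups whose total cost fits the budget
            result.append(group)
    return result
```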
{"title":"Efficient processing of all neighboring object group queries with budget range constraint in road networks","authors":"Yuan-Ko Huang, Chien-Pang Lee","doi":"10.1007/s00607-024-01260-7","DOIUrl":"https://doi.org/10.1007/s00607-024-01260-7","url":null,"abstract":"<p>We present a new type of location-based queries, namely the <i>Budget Range-based All Neighboring Object Group Query</i> (<i>BR</i>-<i>ANOGQ</i> for short), to offer spatial object information while respecting distance and budget range constraints. This query type finds utility in numerous practical scenarios, such as assisting travelers in selecting fitting destinations for their journeys. To support the <i>BR</i>-<i>ANOGQ</i>, we develop data structures for efficient representation of road networks and employ two index structures, the <span>(R^{cC})</span>-tree and the <i>grid index</i>, for managing spatial objects based on their locations and costs. We introduce two pruning criteria to filter out object sets that do not meet the specified distance <i>d</i> and budget range <span>([bgt_m, bgt_M])</span> constraints. We also devise a road network traversal method that selectively accesses a small fraction of objects while generating the query result. The <i>BR</i>-<i>ANOGQ algorithm</i> effectively utilizes index structures and pruning criteria for query processing. Through a series of comprehensive experiments, we demonstrate its efficiency in terms of CPU time and index node accesses, providing valuable insights for location-based queries with constraints.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"44 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139903247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}