Pub Date: 2024-07-09 | DOI: 10.1007/s00607-024-01315-9
Efrén Rama-Maneiro, Juan C. Vidal, Manuel Lama, Pablo Monteagudo-Lago
Predictive monitoring is a subfield of process mining that aims to predict how a running case will unfold in the future. One of its main challenges is forecasting the sequence of activities that will occur from a given point in time onward, known as suffix prediction. Most approaches to the suffix prediction problem learn only how to predict the next activity, and also disregard the structural information present in the process model. This paper proposes a novel architecture based on an encoder-decoder model with an attention mechanism that decouples the representation learning of the prefixes from the inference phase, predicting only the activities of the suffix. During the inference phase, this architecture is extended with a heuristic search algorithm that selects the most probable suffix according to both the structural information extracted from the process model and the information extracted from the log. Our approach has been tested on 12 public event logs against 6 state-of-the-art proposals and significantly outperforms all of them.
Title: Exploiting recurrent graph neural networks for suffix prediction in predictive monitoring
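As an illustration of the suffix-prediction task itself (not the paper's encoder-decoder model or its heuristic search), a toy next-activity model estimated from an event log can already extend a running prefix greedily; all activity names here are made up:

```python
# Hypothetical sketch: greedy suffix prediction from next-activity
# probabilities estimated from completed traces in an event log.
from collections import defaultdict

def learn_transitions(traces):
    """Estimate P(next activity | current activity) from completed traces."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for cur, nxt in zip(trace, trace[1:]):
            counts[cur][nxt] += 1
    return {a: {b: c / sum(nxts.values()) for b, c in nxts.items()}
            for a, nxts in counts.items()}

def predict_suffix(prefix, probs, end="END", max_len=20):
    """Greedily extend the prefix with the most probable next activity
    until the end marker is produced (or a length cap is hit)."""
    suffix = []
    current = prefix[-1]
    while current != end and len(suffix) < max_len:
        nxts = probs.get(current)
        if not nxts:
            break
        current = max(nxts, key=nxts.get)
        suffix.append(current)
    return suffix

log = [["A", "B", "C", "END"], ["A", "B", "D", "END"], ["A", "B", "C", "END"]]
probs = learn_transitions(log)
print(predict_suffix(["A", "B"], probs))  # ['C', 'END']
```

The paper's point is precisely that this kind of pure next-activity greediness ignores process-model structure, which its heuristic search reintroduces at inference time.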
Pub Date: 2024-07-08 | DOI: 10.1007/s00607-024-01308-8
Yifu Zhang, Qian Sun, Ji Chen, Huini Zhou
Crop diseases are among the major natural disasters in agricultural production: they seriously restrict the growth and development of crops and threaten food security. Timely classification, accurate identification, and the application of methods suited to the situation can effectively prevent and control crop diseases and improve the quality of agricultural products. Given the huge variety of crops and diseases, and the differences in disease characteristics at each stage, current deep-learning-based convolutional neural network models struggle to meet the high accuracy requirements of crop disease classification, and a new architecture is needed to improve recognition. In this study, we therefore optimized a deep learning-based classification model for multiple crop leaf diseases by combining transfer learning with an attention mechanism, and deployed the modified model on a smartphone for testing. A dataset containing 10 crop types and 61 disease types at different severity levels was established, and a ResNet50-based architecture was designed using transfer learning and the SE attention mechanism. The classification performance of the different improvements was compared through model training. Results indicate that the average accuracy of the proposed TL-SE-ResNet50 model is increased by 7.7%, reaching 96.32%. The model was also integrated into a smartphone application, where test accuracy reached 94.8% with an average response time of 882 ms. The improved model identifies the diseases, and their severity, of multiple crops well, and the application meets farmers' needs for portable use. This study can serve as a reference for further crop disease management research in agricultural production.
Title: Deep learning-based classification and application test of multiple crop leaf diseases using transfer learning and the attention mechanism
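The SE (Squeeze-and-Excitation) attention added to ResNet50 reweights feature channels by a learned gate. A minimal NumPy sketch of that gating, with random stand-in weights rather than trained ones:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation on an (H, W, C) feature map:
    squeeze = global average pool, excite = FC-ReLU-FC-sigmoid,
    then rescale each channel by its gate in (0, 1)."""
    squeeze = x.mean(axis=(0, 1))                  # (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)         # (C // r,) after reduction
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # (C,) channel gates
    return x * gate                                # channel-wise reweighting

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))   # toy feature map, C = 4
w1 = rng.standard_normal((4, 2))     # reduction ratio r = 2 (illustrative)
w2 = rng.standard_normal((2, 4))
y = se_block(x, w1, w2)
print(y.shape)  # (8, 8, 4)
```

In the paper's TL-SE-ResNet50 these blocks sit inside a pretrained ResNet50; here the block is shown in isolation only.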
Pub Date: 2024-07-08 | DOI: 10.1007/s00607-024-01318-6
Raseena M. Haris, Mahmoud Barhamgi, Armstrong Nhlabatsi, Khaled M. Khan
One of the preconditions for efficient cloud computing services is the continuous availability of services to clients. However, services may become temporarily unavailable for various reasons: routine maintenance, load balancing, cyber-attacks, power management, fault tolerance, emergency incident response, and resource usage. Live Virtual Machine Migration (LVM) addresses service unavailability by moving virtual machines between hosts without disrupting running services. Pre-copy memory migration is a common LVM approach in cloud systems, but it suffers from the high rate of frequently updated memory pages, known as dirty pages. Transferring these dirty pages during pre-copy migration prolongs the overall migration time. If many memory pages remain after a predefined number of transfer iterations, the stop-and-copy phase is initiated, which significantly increases downtime and negatively impacts service availability. To mitigate this issue, we introduce a prediction-based approach that optimizes the migration process by dynamically halting the iteration phase when the predicted downtime falls below a predefined threshold. Our machine learning method was rigorously evaluated through experiments on a dedicated testbed using KVM/QEMU technology, involving different VM sizes and memory-intensive workloads. A comparative analysis against previously proposed pre-copy methods and the default migration approach reveals a remarkable improvement: an average 64.91% reduction in downtime across RAM configurations in high-write-intensive workloads, along with an average reduction in total migration time of approximately 85.81%. These findings underscore the practical advantages of our method in reducing service disruptions during live virtual machine migration in cloud systems.
Title: Optimizing pre-copy live virtual machine migration in cloud computing using machine learning-based prediction model
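The stopping rule described above can be sketched with a deliberately simplified analytic model, where predicted downtime is just remaining dirty pages divided by bandwidth (the paper uses a trained ML predictor instead); all numbers below are illustrative:

```python
def precopy_rounds(total_pages, dirty_rate, bandwidth, downtime_threshold,
                   max_rounds=30):
    """Iterative pre-copy: each round transfers the currently dirty pages
    while new pages get dirtied at dirty_rate (pages/s). Stop-and-copy is
    triggered once predicted downtime (remaining / bandwidth) drops below
    the threshold, and that predicted downtime is returned."""
    remaining = total_pages
    rounds = 0
    while rounds < max_rounds:
        transfer_time = remaining / bandwidth
        if transfer_time < downtime_threshold:
            break  # predicted stop-and-copy downtime is acceptable
        remaining = min(total_pages, dirty_rate * transfer_time)
        rounds += 1
    return rounds, remaining / bandwidth

# 10,000-page VM, 2,000 pages/s link, 500 pages/s dirtying, 0.5 s target:
rounds, downtime = precopy_rounds(10000, 500, 2000, 0.5)
print(rounds, downtime)  # 2 0.3125
```

The model makes visible why a low dirty-rate-to-bandwidth ratio converges quickly, while write-intensive workloads keep the iteration phase alive, which is exactly the case the paper's predictor targets.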
Pub Date: 2024-07-06 | DOI: 10.1007/s00607-024-01316-8
Seyyed Javad Bozorg Zadeh Razavi, Haleh Amintoosi, Mohammad Allahbakhsh
Crowdsourcing is a powerful technique for accomplishing tasks that are difficult for machines but easy for humans. However, ensuring the quality of the workers who participate in a task is a major challenge. Most existing studies select suitable workers based on worker attributes and task requirements, neglecting the requesters' characteristics as a key factor in the crowdsourcing process. In this paper, we address this gap by considering the requesters' preferences and behavior in competitive crowdsourcing systems, where the requester chooses only one worker's contribution as the final answer. We propose a model that takes the requesters' characteristics into consideration when finding suitable workers. We also propose new definitions of requester clarity and fairness, together with models and formulations that employ them, alongside task and worker attributes, to find more suitable workers. We evaluated the efficacy of our model on a real-world dataset and compared it with two current state-of-the-art approaches. Our results demonstrate the superiority of our method in assigning the most suitable workers.
Title: A clarity and fairness aware framework for selecting workers in competitive crowdsourcing tasks
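A hedged sketch of how requester-side signals might enter a worker-scoring function: with a clear task description, skill match is informative; otherwise reputation dominates; and a fair requester history scales the whole score. The weighting scheme and field names below are invented for illustration, not the paper's formulation:

```python
def score_worker(worker, requester_clarity, requester_fairness):
    """Illustrative score: clarity in [0, 1] interpolates between trusting
    the worker's task-specific skill match and their general reputation;
    fairness in [0, 1] scales the result."""
    skill_term = (requester_clarity * worker["skill_match"]
                  + (1 - requester_clarity) * worker["reputation"])
    return requester_fairness * skill_term

def select_worker(workers, clarity, fairness):
    """Pick the single winning worker, as in competitive crowdsourcing."""
    return max(workers, key=lambda w: score_worker(w, clarity, fairness))

workers = [
    {"name": "w1", "skill_match": 0.9, "reputation": 0.4},
    {"name": "w2", "skill_match": 0.5, "reputation": 0.8},
]
print(select_worker(workers, clarity=0.9, fairness=0.7)["name"])  # w1
```

With a vague requester (low clarity) the same pool flips toward the high-reputation worker, which is the kind of requester-dependent behavior the paper argues worker selection should capture.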
Pub Date: 2024-07-04 | DOI: 10.1007/s00607-024-01302-0
Sayed Mohsen Hashemi, Amir Sahafi, Amir Masoud Rahmani, Mahdi Bohlouli
Today, with the increasing expansion of IoT devices and the growing number of user requests, processing their demands in computational environments has become increasingly challenging. The large volume of user requests and the distribution of tasks among computational resources often result in wasteful energy consumption and increased latency. Correct resource allocation and reduced energy consumption remain significant challenges in fog computing, and better resource management methods can provide better services for users. In this article, the Cat Swarm Optimization (CSO) metaheuristic is used for more efficient resource allocation and service activation management. User requests are received by a request evaluator, prioritized, and efficiently executed on fog resources using the container live migration technique. Container live migration moves services to better placements on fog resources, avoiding unnecessary activation of physical resources. The proposed method uses a resource manager to identify and classify available resources, aiming to determine the initial capacity of physical fog resources. Its performance has been tested and evaluated in iFogSim against six metaheuristic algorithms: Particle Swarm Optimization (PSO), Ant Colony Optimization, the Grasshopper Optimization algorithm, the Genetic algorithm, the Cuckoo Optimization algorithm, and Gray Wolf Optimization. The proposed method shows superior efficiency in energy consumption, execution time, latency, and network lifetime compared to the other algorithms.
Title: A new approach for service activation management in fog computing using Cat Swarm Optimization algorithm
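For readers unfamiliar with CSO, a minimal sketch of its two modes, seeking (keep a local random perturbation only if it improves) and tracing (move with velocity toward the best cat), applied to a toy sphere function. Parameter values are illustrative, not the paper's:

```python
import random

def cat_swarm_minimize(f, dim, n_cats=10, iters=60, mixture_ratio=0.3,
                       srd=0.2, seed=1):
    """Minimal Cat Swarm Optimization. mixture_ratio is the fraction of
    cats in tracing mode each step; srd bounds the seeking perturbation."""
    rng = random.Random(seed)
    cats = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_cats)]
    vel = [[0.0] * dim for _ in range(n_cats)]
    best = min(cats, key=f)[:]
    for _ in range(iters):
        for i, cat in enumerate(cats):
            if rng.random() < mixture_ratio:        # tracing mode
                for d in range(dim):
                    vel[i][d] = 0.7 * vel[i][d] + rng.random() * (best[d] - cat[d])
                    cat[d] += vel[i][d]
            else:                                   # seeking mode
                candidate = [x + rng.uniform(-srd, srd) for x in cat]
                if f(candidate) < f(cat):           # greedy local move
                    cats[i] = candidate
        best = min(cats + [best], key=f)[:]         # best-so-far is monotone
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)
best, value = cat_swarm_minimize(sphere, dim=2)
```

In the paper the objective is not a sphere but a placement cost over fog resources (energy, latency, lifetime); this sketch only shows the search mechanics.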
Pub Date: 2024-07-03 | DOI: 10.1007/s00607-024-01312-y
Atiyeh Javaheri, Ali Bohlooli, Kamal Jamshidi
In edge computing, repetitive computations are common. However, the traditional TCP/IP architecture fails to identify these repetitions, so redundant computations are recomputed by edge resources. To address this issue and improve efficiency, Information-Centric Networking (ICN)-based edge computing is employed: the ICN architecture leverages its forwarding and naming features to recognize repetitive computations and direct them to the appropriate edge resources, enabling "computation reuse" and significantly improving the overall effectiveness of edge computing. Dynamically generated computations, however, often experience prolonged response times. Naming conventions are crucial for establishing and tracking connections between input requests and the edge, but because unique IDs are embedded in these names, each computing request with identical input data is treated as distinct, rendering ICN's aggregation feature unusable. In this study, we propose a novel approach that modifies the Content Store (CS) table so that computing requests with the same input data but different unique IDs, which produce identical outcomes, are treated as equivalent. This reduces distance and completion time and increases the hit ratio, since duplicate computations are served from the cache rather than re-routed to edge resources. Through simulations, we demonstrate that our method significantly enhances cache reuse compared to the default method with no reuse, achieving an average improvement of over 57%, while the speed-up from the enhancement amounts to 15%. Notably, our method surpasses previous approaches by exhibiting the lowest average completion time, particularly at lower request frequencies. These findings highlight the efficacy and potential of the proposed method in optimizing edge computing performance.
Title: Enhancing computation reuse efficiency in ICN-based edge computing by modifying content store table structure
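The modified Content Store behavior can be sketched as a cache keyed on the request name with its trailing unique ID stripped, so two requests over the same input data hit the same entry. The naming scheme and result placeholder below are made-up examples, not the paper's exact CS format:

```python
class ComputationStore:
    """Toy CS table: cache computation results keyed by the input-data part
    of the name, ignoring the per-request unique ID suffix."""

    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def request(self, name):
        # Hypothetical name layout "/compute/<fn>/<input>/<unique-id>":
        # drop the trailing ID so identical inputs share one cache key.
        key = "/".join(name.split("/")[:-1])
        if key in self.store:
            self.hits += 1                    # reuse: no edge recomputation
            return self.store[key]
        self.misses += 1
        result = f"result({key})"             # stand-in for edge computation
        self.store[key] = result
        return result

cs = ComputationStore()
cs.request("/compute/resize/img42/ID-aaaa")   # miss, computed at the edge
cs.request("/compute/resize/img42/ID-bbbb")   # hit, served from the CS
print(cs.hits, cs.misses)  # 1 1
```

Under the default naming, the two requests above would be distinct keys and both would be recomputed, which is the waste the paper's CS modification removes.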
Pub Date: 2024-07-03 | DOI: 10.1007/s00607-024-01314-w
Samia El Haddouti, Mohammed Khaldoune, Meryeme Ayache, Mohamed Dafir Ech-Cherif El Kettani
The adoption of Smart Contracts has revolutionized industries like DeFi and supply chain management, streamlining processes and enhancing transparency. However, ensuring their security is crucial: their immutable nature leaves deployed errors open to exploitation. Neglecting security can lead to severe consequences such as financial losses and reputation damage. Rigorous analytical processes are therefore needed to evaluate Smart Contract security, despite the cost and complexity of current tools. Following an empirical examination of current tools for identifying vulnerabilities in Smart Contracts, this paper presents a robust solution based on Machine Learning algorithms. The objective is to improve the auditing and classification of Smart Contracts, building trust and confidence in Blockchain-based applications. By automating the security auditing process, the model not only reduces manual effort and execution time but also ensures a comprehensive analysis, uncovering even complex security vulnerabilities that traditional tools may miss. Overall, the evaluation demonstrates that the proposed model surpasses conventional counterparts in vulnerability detection performance, achieving an accuracy exceeding 98% with optimized execution times.
Title: Smart contracts auditing and multi-classification using machine learning algorithms: an efficient vulnerability detection in ethereum blockchain
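A toy stand-in for the multi-classification step: counting a few vulnerability-associated Solidity source patterns and picking the dominant one. The feature list and label names are illustrative only; the paper's approach is a trained ML classifier, not this keyword heuristic:

```python
# Hypothetical pattern-count features over Solidity source text.
FEATURES = ["call.value", "tx.origin", "block.timestamp", "delegatecall"]
LABELS = {
    0: "reentrancy",
    1: "tx-origin-auth",
    2: "timestamp-dependence",
    3: "delegatecall-injection",
}

def extract(source):
    """Feature vector: occurrence count of each pattern."""
    return [source.count(f) for f in FEATURES]

def classify(source):
    """Assign the class whose pattern occurs most often, or 'none'."""
    feats = extract(source)
    if not any(feats):
        return "none"
    return LABELS[max(range(len(FEATURES)), key=lambda i: feats[i])]

print(classify("msg.sender.call.value(amount)()"))  # reentrancy
```

A real pipeline would feed such features (or bytecode/opcode embeddings) into a trained multi-class model; the sketch only fixes the input-to-label shape of the problem.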
Pub Date: 2024-07-02 | DOI: 10.1007/s00607-024-01313-x
Yevhenii Shudrenko, Andreas Timm-Giel
Wireless communication offers significant advantages in flexibility, coverage, and maintenance compared to wired solutions and is being actively deployed in industry. IEEE 802.15.4 standardizes the Physical and Medium Access Control (MAC) layers for Low Power and Lossy Networks (LLNs) and features Timeslotted Channel Hopping (TSCH) for reliable, low-latency communication with scheduling capabilities. Multiple scheduling schemes have been proposed to address Quality of Service (QoS) in challenging scenarios. However, most are evaluated through simulations and experiments, which are often time-consuming and difficult to reproduce. Analytical modeling of TSCH performance is lacking: the state of the art considers only one-hop communication with simplified traffic patterns. This work proposes a new framework based on queuing theory and combinatorics to evaluate end-to-end delays in multihop TSCH networks of arbitrary topology, traffic, and link conditions. The framework is validated in OMNeT++ simulations, shows below 6% root-mean-square error (RMSE), and provides a quick and reliable latency estimation tool to support decision-making and enable formalized comparison of existing scheduling solutions.
Title: Modeling end-to-end delays in TSCH wireless sensor networks using queuing theory and combinatorics
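A first-order version of such an analytical estimate, assuming one dedicated cell per slotframe per hop (so an average wait of half a slotframe) and geometric retransmissions needing 1/PRR expected attempts per link; this is far simpler than the paper's queuing-theoretic framework and ignores queueing entirely:

```python
def expected_e2e_delay_ms(per_link_prr, slotframe_len=101, slot_ms=10.0):
    """Expected multihop delay: at each hop a packet waits on average half
    a slotframe for its dedicated cell, and a link with packet reception
    ratio PRR takes 1/PRR transmission attempts on average."""
    wait_per_attempt = (slotframe_len / 2.0) * slot_ms
    return sum((1.0 / prr) * wait_per_attempt for prr in per_link_prr)

# Two hops: a perfect link and a 50%-PRR link, 101-slot frame, 10 ms slots.
print(expected_e2e_delay_ms([1.0, 0.5]))  # 1515.0
```

Even this crude model shows why lossy links dominate end-to-end latency (each retry costs another half slotframe on average), the effect the paper quantifies precisely per topology and schedule.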
Deploying virtual machines poses a significant challenge for cloud data centers, requiring careful consideration of objectives such as minimizing energy consumption and resource wastage, ensuring load balancing, and meeting service-level agreements. While researchers have explored multi-objective methods for virtual machine placement, evaluating candidate solutions in such scenarios remains complex. In this paper, we introduce two novel multi-objective algorithms tailored to this challenge. The VMPMFuzzyORL method employs reinforcement learning for virtual machine placement, with candidate solutions assessed by a fuzzy system. While effective, the fuzzy system introduces notable runtime overhead. To mitigate this, we propose MRRL, an alternative approach that first clusters virtual machines using the k-means algorithm and then optimizes placement with a customized reinforcement learning strategy that uses multiple reward signals. Extensive simulations highlight the significant advantages of these approaches over existing techniques, particularly in terms of energy efficiency, resource utilization, load balancing, and overall execution time.
{"title":"Enhancing virtual machine placement efficiency in cloud data centers: a hybrid approach using multi-objective reinforcement learning and clustering strategies","authors":"Arezoo Ghasemi, Abolfazl Toroghi Haghighat, Amin Keshavarzi","doi":"10.1007/s00607-024-01311-z","DOIUrl":"https://doi.org/10.1007/s00607-024-01311-z","url":null,"abstract":"<p>Deploying virtual machines poses a significant challenge for cloud data centers, requiring careful consideration of objectives such as minimizing energy consumption and resource wastage, ensuring load balancing, and meeting service-level agreements. While researchers have explored multi-objective methods for virtual machine placement, evaluating candidate solutions in such scenarios remains complex. In this paper, we introduce two novel multi-objective algorithms tailored to this challenge. The VMPMFuzzyORL method employs reinforcement learning for virtual machine placement, with candidate solutions assessed by a fuzzy system. While effective, the fuzzy system introduces notable runtime overhead. To mitigate this, we propose MRRL, an alternative approach that first clusters virtual machines using the k-means algorithm and then optimizes placement with a customized reinforcement learning strategy that uses multiple reward signals. Extensive simulations highlight the significant advantages of these approaches over existing techniques, particularly in terms of energy efficiency, resource utilization, load balancing, and overall execution time.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141516489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
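To make the "multiple reward signals" idea concrete, the following sketch scores a placement with separate energy, wastage, and balance rewards and combines them with weights. The formulas and weights here are illustrative assumptions for demonstration, not the paper's MRRL definitions.

```python
# Sketch: multi-signal reward for a VM placement, expressed per host as
# (cpu_used, cpu_capacity). All three reward formulas are illustrative.

def placement_rewards(hosts: list[tuple[float, float]]) -> tuple[float, float, float]:
    utils = [used / cap for used, cap in hosts if cap > 0]
    active = sum(1 for u in utils if u > 0)
    energy_reward = -active  # fewer powered-on hosts -> lower energy cost
    wastage_reward = -sum(1.0 - u for u in utils if u > 0)  # idle capacity on active hosts
    mean_u = sum(utils) / len(utils)
    balance_reward = -sum((u - mean_u) ** 2 for u in utils)  # penalize utilization variance
    return energy_reward, wastage_reward, balance_reward

def combined_reward(hosts, weights=(1.0, 1.0, 1.0)) -> float:
    """Weighted sum of the individual reward signals."""
    return sum(w * r for w, r in zip(weights, placement_rewards(hosts)))

# Consolidating load onto one host scores higher here than spreading it:
consolidated = [(8.0, 10.0), (0.0, 10.0)]
spread = [(4.0, 10.0), (4.0, 10.0)]
```

In an actual RL setup these signals would feed the agent's update rule; the weighting reflects one common way to trade off the competing objectives named in the abstract.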
Pub Date : 2024-07-02DOI: 10.1007/s00607-024-01306-w
Jiangang Hou, Xin Li, Hongji Xu, Chun Wang, Lizhen Cui, Zhi Liu, Changzhen Hu
With the development of Internet technology, cyberspace security has become a research hotspot, and network traffic classification is closely related to it. This paper investigates classification based on raw traffic data. This involves analyzing packets at different granularities, separating packet headers from payloads, padding and aligning packet headers, and converting them into structured data with three representation types: bit, byte, and segmented protocol fields. Based on this, we propose the Rew-LSTM classification model and evaluate it on publicly available encrypted-traffic datasets. The experiments show that excellent multiclass classification results can be obtained using only the data in packet headers, especially when the data is represented as bits, outperforming state-of-the-art methods. In addition, we propose a global normalization method that, in our experiments, outperforms feature-specific normalization for both Tor traffic and regular encrypted traffic.
{"title":"Packet header-based reweight-long short term memory (Rew-LSTM) method for encrypted network traffic classification","authors":"Jiangang Hou, Xin Li, Hongji Xu, Chun Wang, Lizhen Cui, Zhi Liu, Changzhen Hu","doi":"10.1007/s00607-024-01306-w","DOIUrl":"https://doi.org/10.1007/s00607-024-01306-w","url":null,"abstract":"<p>With the development of Internet technology, cyberspace security has become a research hotspot, and network traffic classification is closely related to it. This paper investigates classification based on raw traffic data. This involves analyzing packets at different granularities, separating packet headers from payloads, padding and aligning packet headers, and converting them into structured data with three representation types: bit, byte, and segmented protocol fields. Based on this, we propose the Rew-LSTM classification model and evaluate it on publicly available encrypted-traffic datasets. The experiments show that excellent multiclass classification results can be obtained using only the data in packet headers, especially when the data is represented as bits, outperforming state-of-the-art methods. In addition, we propose a global normalization method that, in our experiments, outperforms feature-specific normalization for both Tor traffic and regular encrypted traffic.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141516457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
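The contrast between global and feature-specific normalization can be shown with a small sketch: per-feature min-max stretches each column to [0, 1] independently, while a single global min-max preserves relative magnitudes across fields. The exact normalization the paper uses may differ; this only illustrates the two schemes.

```python
import numpy as np

def per_feature_minmax(X: np.ndarray) -> np.ndarray:
    """Column-wise min-max: each feature scaled to [0, 1] independently."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi - lo == 0, 1.0, hi - lo)  # avoid divide-by-zero on constant columns
    return (X - lo) / span

def global_minmax(X: np.ndarray) -> np.ndarray:
    """One min-max over all features, preserving cross-feature scale."""
    lo, hi = X.min(), X.max()
    return (X - lo) / (hi - lo if hi != lo else 1.0)

# Header-like features on very different scales, e.g. a flag bit vs. a length field.
X = np.array([[0.0, 1500.0],
              [1.0,   40.0]])
```

Here `per_feature_minmax(X)` maps both columns onto [0, 1], erasing the fact that length values dwarf the flag, whereas `global_minmax(X)` keeps the flag column near zero relative to the lengths.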