Deep learning-based classification and application test of multiple crop leaf diseases using transfer learning and the attention mechanism
Pub Date: 2024-07-08, DOI: 10.1007/s00607-024-01308-8
Yifu Zhang, Qian Sun, Ji Chen, Huini Zhou
Crop diseases are among the major natural disasters in agricultural production; they seriously restrict the growth and development of crops and threaten food security. Timely classification, accurate identification, and the application of methods suited to the situation can effectively prevent and control crop diseases and improve the quality of agricultural products. Given the huge variety of crops and diseases and the differences in disease characteristics at each stage, current convolutional neural network models based on deep learning struggle to meet the higher requirements of accurate crop disease classification, and a new architecture is needed to improve recognition. Therefore, in this study, we optimized a deep learning-based classification model for multiple crop leaf diseases by combining transfer learning with an attention mechanism, and deployed the modified model on a smartphone for testing. A dataset containing 10 crop types, 61 disease types, and different disease severities was established, and an algorithm structure based on ResNet50 was designed using transfer learning and the SE (squeeze-and-excitation) attention mechanism. The classification performance of the different improvement methods was compared through model training. The results indicate that the average accuracy of the proposed TL-SE-ResNet50 model increases by 7.7%, reaching 96.32%. The model was also integrated into a smartphone application, where test accuracy reaches 94.8% with an average response time of 882 ms. The improved model identifies the diseases and disease conditions of multiple crops well, and the application can meet the portable usage needs of farmers. This study can serve as a reference for further crop disease management research in agricultural production.
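A minimal sketch of the squeeze-and-excitation (SE) attention block named above, written in PyTorch; the channel size, reduction ratio, and the way such a block would be attached to a pretrained ResNet50 backbone are illustrative assumptions, not the paper's exact TL-SE-ResNet50 configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)              # global average pooling
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                    # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                         # rescale feature maps channel-wise

# Transfer-learning idea (assumed): freeze a pretrained ResNet50 backbone, insert SE blocks
# after its stages, and train only the SE blocks plus a new classification head.
x = torch.randn(2, 256, 56, 56)                              # dummy output of an early ResNet stage
print(SEBlock(256)(x).shape)                                 # torch.Size([2, 256, 56, 56])
```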
{"title":"Deep learning-based classification and application test of multiple crop leaf diseases using transfer learning and the attention mechanism","authors":"Yifu Zhang, Qian Sun, Ji Chen, Huini Zhou","doi":"10.1007/s00607-024-01308-8","DOIUrl":"https://doi.org/10.1007/s00607-024-01308-8","url":null,"abstract":"<p>Crop diseases are among the major natural disasters in agricultural production that seriously restrict the growth and development of crops, threatening food security. Timely classification, accurate identification, and the application of methods suitable for the situation can effectively prevent and control crop diseases, improving the quality of agricultural products. Considering the huge variety of crops, diseases, and differences in the characteristics of diseases during each stage, the current convolutional neural network models based on deep learning need to meet the higher requirement of classifying crop diseases accurately. It is necessary to introduce a new architecture scheme to improve the recognition effect. Therefore, in this study, we optimized the deep learning-based classification model for multiple crop leaf diseases using combined transfer learning and the attention mechanism, the modified model was deployed in the smartphone for testing. Dataset that containing 10 types of crops, 61 types of diseases, and different degrees was established, the algorithm structure based on ResNet50 was designed using transfer learning and the SE attention mechanism. The classification performances of different improvement methods were compared by model training. Result indicates that the average accuracy of the proposed TL-SE-ResNet50 model is increased by 7.7%, reaching 96.32%. The model was also integrated and implemented in the smartphone and the test result of the application reaches 94.8%, and the average response time is 882 ms. The improved model proposed has a good effect on the identification of diseases and their condition of multiple crops, and the application can meet the portable usage needs of farmers. This study can provide reference for more crop disease management research in agricultural production.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"37 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141572981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A clarity and fairness aware framework for selecting workers in competitive crowdsourcing tasks
Pub Date: 2024-07-06, DOI: 10.1007/s00607-024-01316-8
Seyyed Javad Bozorg Zadeh Razavi, Haleh Amintoosi, Mohammad Allahbakhsh
Crowdsourcing is a powerful technique for accomplishing tasks that are difficult for machines but easy for humans. However, ensuring the quality of the workers who participate in a task is a major challenge. Most existing studies have focused on selecting suitable workers based on their attributes and the task requirements, while neglecting the requesters’ characteristics as a key factor in the crowdsourcing process. In this paper, we address this gap by considering requesters’ preferences and behavior in crowdsourcing systems with competition, where the requester chooses only one worker’s contribution as the final answer. We propose a model in which the requesters’ characteristics are taken into consideration when finding suitable workers. We also propose new definitions of requester clarity and fairness, together with models and formulations that employ them, alongside task and worker attributes, to find more suitable workers. We evaluated the efficacy of the proposed model on a real-world dataset and compared it with two current state-of-the-art approaches. Our results demonstrate the superiority of the proposed method in assigning the most suitable workers.
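As a toy illustration of folding requester characteristics into worker selection, the score below combines worker attributes with assumed requester clarity and fairness values; the weights, the score definitions, and the normalisation are hypothetical and are not the paper's formulation.

```python
def worker_score(skill_match, reputation, clarity, fairness,
                 w_skill=0.5, w_rep=0.2, w_clarity=0.15, w_fair=0.15):
    """Toy ranking score; all inputs are assumed to be normalised to [0, 1]."""
    return (w_skill * skill_match + w_rep * reputation
            + w_clarity * clarity + w_fair * fairness)

workers = {"w1": (0.9, 0.7), "w2": (0.6, 0.95)}    # (skill_match, reputation) per worker
requester = {"clarity": 0.8, "fairness": 0.4}      # history-based requester traits (assumed)

ranked = sorted(workers, key=lambda w: worker_score(*workers[w], **requester), reverse=True)
print(ranked)                                      # workers ordered by suitability for this requester
```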
{"title":"A clarity and fairness aware framework for selecting workers in competitive crowdsourcing tasks","authors":"Seyyed Javad Bozorg Zadeh Razavi, Haleh Amintoosi, Mohammad Allahbakhsh","doi":"10.1007/s00607-024-01316-8","DOIUrl":"https://doi.org/10.1007/s00607-024-01316-8","url":null,"abstract":"<p>Crowdsourcing is a powerful technique for accomplishing tasks that are difficult for machines but easy for humans. However, ensuring the quality of the workers who participate in the task is a major challenge. Most of the existing studies have focused on selecting suitable workers based on their attributes and the task requirements, while neglecting the requesters’ characteristics as a key factor in the crowdsourcing process. In this paper, we address this gap by considering the requesters’ preferences and behavior in crowdsourcing systems with competition, where the requester chooses only one worker’s contribution as the final answer. A model is proposed in which the requesters’ characteristics are taken into consideration when finding suitable workers. Also, we propose new definitions for clarity and the fairness of requesters and propose models and formulations to employ them, alongside task and workers’ attributes, to find more suitable workers. We have evaluated the efficacy of our proposed model by analyzing a real-world dataset and compared it with two current state-of-the-art approaches. Our results demonstrate the superiority of our proposed method in assigning the most suitable workers.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"35 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141572982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new approach for service activation management in fog computing using Cat Swarm Optimization algorithm
Pub Date: 2024-07-04, DOI: 10.1007/s00607-024-01302-0
Sayed Mohsen Hashemi, Amir Sahafi, Amir Masoud Rahmani, Mahdi Bohlouli
Today, with the increasing expansion of IoT devices and the growing number of user requests, processing their demands in computational environments has become increasingly challenging. The large volume of user requests and the need to distribute tasks appropriately among computational resources often result in uncoordinated energy consumption and increased latency. Correct allocation of resources and reduction of energy consumption in fog computing remain significant challenges in this field, and improving resource management methods can provide better services for users. In this article, the Cat Swarm Optimization (CSO) metaheuristic is used for more efficient resource allocation and service activation management. User requests are received by a request evaluator, prioritized, and executed efficiently on fog resources using the container live-migration technique. Container live migration moves services and places them better on fog resources, avoiding unnecessary activation of physical resources. The proposed method uses a resource manager to identify and classify available resources in order to determine the initial capacity of physical fog resources. The performance of the proposed method was tested and evaluated against six metaheuristic algorithms, namely Particle Swarm Optimization (PSO), Ant Colony Optimization, the Grasshopper Optimization Algorithm, the Genetic Algorithm, the Cuckoo Optimization Algorithm, and Gray Wolf Optimization, within iFogSim. The proposed method shows superior efficiency in energy consumption, execution time, latency, and network lifetime compared to the other algorithms.
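A compact, hedged sketch of the Cat Swarm Optimization loop (seeking and tracing modes) minimizing a made-up energy cost over candidate service-to-node assignments; the cost model, the discrete encoding, and all parameters are illustrative assumptions rather than the article's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SERVICES, N_NODES, N_CATS, ITERS = 8, 4, 20, 100

def cost(assign):
    """Toy objective: energy for each active node plus a load-imbalance penalty (assumed)."""
    load = np.bincount(assign, minlength=N_NODES)
    return 10.0 * np.count_nonzero(load) + load.std()

cats = rng.integers(0, N_NODES, size=(N_CATS, N_SERVICES))   # each cat = one assignment vector
for _ in range(ITERS):
    seeking = rng.random(N_CATS) < 0.8                        # mixture ratio: ~80% of cats seek
    best = cats[np.argmin([cost(c) for c in cats])].copy()
    for i in range(N_CATS):
        if seeking[i]:       # seeking mode: try a few local mutations, keep the best copy
            copies = np.repeat(cats[i][None], 5, axis=0)
            idx = rng.integers(0, N_SERVICES, size=5)
            copies[np.arange(5), idx] = rng.integers(0, N_NODES, size=5)
            cats[i] = min(copies, key=cost)
        else:                # tracing mode: move some components toward the global best cat
            mask = rng.random(N_SERVICES) < 0.5
            cats[i][mask] = best[mask]

best_cat = min(cats, key=cost)
print("best assignment:", best_cat, "cost:", round(cost(best_cat), 3))
```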
{"title":"A new approach for service activation management in fog computing using Cat Swarm Optimization algorithm","authors":"Sayed Mohsen Hashemi, Amir Sahafi, Amir Masoud Rahmani, Mahdi Bohlouli","doi":"10.1007/s00607-024-01302-0","DOIUrl":"https://doi.org/10.1007/s00607-024-01302-0","url":null,"abstract":"<p>Today, with the increasing expansion of IoT devices and the growing number of user requests, processing their demands in computational environments has become increasingly challenging.The large volume of user requests and the appropriate distribution of tasks among computational resources often result in disordered energy consumption and increased latency. The correct allocation of resources and reducing energy consumption in fog computing are still significant challenges in this field. Improving resource management methods can provide better services for users. In this article, with the aim of more efficient allocation of resources and service activation management, the metaheuristic algorithm CSO (Cat Swarm Optimization) is used. User requests are received by a request evaluator, prioritized, and efficiently executed using the container live migration technique on fog resources. The container live migration technique leads to the migration of services and their better placement on fog resources, avoiding unnecessary activation of physical resources. The proposed method uses a resource manager to identify and classify available resources, aiming to determine the initial capacity of physical fog resources. The performance of the proposed method has been tested and evaluated using six metaheuristic algorithms, namely Particle Swarm Optimization (PSO), Ant Colony Optimization, Grasshopper Optimization algorithm, Genetic algorithm, Cuckoo Optimization algorithm, and Gray Wolf Optimization, within iFogSim. The proposed method has shown superior efficiency in energy consumption, execution time, latency, and network lifetime compared to other algorithms.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"58 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141548123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing computation reuse efficiency in ICN-based edge computing by modifying content store table structure
Pub Date: 2024-07-03, DOI: 10.1007/s00607-024-01312-y
Atiyeh Javaheri, Ali Bohlooli, Kamal Jamshidi
In edge computing, repetitive computations are a common occurrence. However, the traditional TCP/IP architecture used in edge computing fails to identify these repetitions, so redundant computations are recomputed by edge resources. To address this issue and improve the efficiency of edge computing, Information-Centric Networking (ICN)-based edge computing is employed. The ICN architecture leverages its forwarding and naming-convention features to recognize repetitive computations and direct them to the appropriate edge resources, thereby promoting “computation reuse” and significantly improving overall effectiveness. In edge computing, dynamically generated computations often experience prolonged response times, and naming conventions are crucial for establishing and tracking connections between input requests and the edge. Because unique IDs are incorporated into these names, computing requests with identical input data are treated as distinct, rendering ICN’s aggregation feature unusable. In this study, we propose a novel approach that modifies the Content Store (CS) table so that computing requests with the same input data but different unique IDs, which produce identical outcomes, are treated as equivalent. The benefits of this approach include reduced distance and completion time and an increased hit ratio, since duplicate computations are no longer recomputed by edge resources. Through simulations, we demonstrate that our method significantly enhances cache reuse compared to the default method with no reuse, achieving an average improvement of over 57% and a speed-up of 15%. Notably, our method surpasses previous approaches by exhibiting the lowest average completion time, particularly at lower request frequencies. These findings highlight the efficacy and potential of the proposed method in optimizing edge computing performance.
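The core idea of treating requests with identical input data (but different unique IDs) as the same computation can be shown with a tiny cache keyed on a hash of the input rather than on the full request name; this is a conceptual sketch only, not the paper's actual CS table modification, and the request IDs and compute function are made up.

```python
import hashlib

content_store = {}   # maps hash(input data) -> cached computation result

def handle_request(request_id: str, input_data: bytes, compute):
    """Reuse a cached result whenever the input data matches, ignoring request_id."""
    key = hashlib.sha256(input_data).hexdigest()
    if key in content_store:
        return content_store[key], "hit"      # reuse: no recomputation at the edge
    result = compute(input_data)
    content_store[key] = result
    return result, "miss"

square = lambda b: int(b) ** 2
print(handle_request("req-001", b"12", square))   # (144, 'miss') -> computed at the edge
print(handle_request("req-002", b"12", square))   # (144, 'hit')  -> same input, different ID
```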
{"title":"Enhancing computation reuse efficiency in ICN-based edge computing by modifying content store table structure","authors":"Atiyeh Javaheri, Ali Bohlooli, Kamal Jamshidi","doi":"10.1007/s00607-024-01312-y","DOIUrl":"https://doi.org/10.1007/s00607-024-01312-y","url":null,"abstract":"<p>In edge computing, repetitive computations are a common occurrence. However, the traditional TCP/IP architecture used in edge computing fails to identify these repetitions, resulting in redundant computations being recomputed by edge resources. To address this issue and enhance the efficiency of edge computing, Information-Centric Networking (ICN)-based edge computing is employed. The ICN architecture leverages its forwarding and naming convention features to recognize repetitive computations and direct them to the appropriate edge resources, thereby promoting “computation reuse”. This approach significantly improves the overall effectiveness of edge computing. In the realm of edge computing, dynamically generated computations often experience prolonged response times. To establish and track connections between input requests and the edge, naming conventions become crucial. By incorporating unique IDs within these naming conventions, each computing request with identical input data is treated as distinct, rendering ICN’s aggregation feature unusable. In this study, we propose a novel approach that modifies the Content Store (CS) table, treating computing requests with the same input data and unique IDs, resulting in identical outcomes, as equivalent. The benefits of this approach include reducing distance and completion time, and increasing hit ratio, as duplicate computations are no longer routed to edge resources or utilized cache. Through simulations, we demonstrate that our method significantly enhances cache reuse compared to the default method with no reuse, achieving an average improvement of over 57%. Furthermore, the speed up ratio of enhancement amounts to 15%. Notably, our method surpasses previous approaches by exhibiting the lowest average completion time, particularly when dealing with lower request frequencies. These findings highlight the efficacy and potential of our proposed method in optimizing edge computing performance.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"34 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141548270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart contracts auditing and multi-classification using machine learning algorithms: an efficient vulnerability detection in ethereum blockchain
Pub Date: 2024-07-03, DOI: 10.1007/s00607-024-01314-w
Samia El Haddouti, Mohammed Khaldoune, Meryeme Ayache, Mohamed Dafir Ech-Cherif El Kettani
The adoption of Smart Contracts has revolutionized industries such as DeFi and supply chain management, streamlining processes and enhancing transparency. However, ensuring their security is crucial: because Smart Contracts are immutable once deployed, exploitation and errors cannot be corrected afterwards. Neglecting security can lead to severe consequences such as financial losses and reputational damage. Addressing this requires rigorous analytical processes to evaluate Smart Contract security, despite the cost and complexity of current tools. Following an empirical examination of existing tools for identifying vulnerabilities in Smart Contracts, this paper presents a robust and promising solution based on Machine Learning algorithms. The objective is to improve the auditing and classification of Smart Contracts, building trust and confidence in Blockchain-based applications. By automating the security auditing process, the model not only reduces manual effort and execution time but also ensures a comprehensive analysis, uncovering even complex security vulnerabilities that traditional tools may miss. Overall, the evaluation demonstrates that the proposed model surpasses conventional counterparts in vulnerability detection performance, achieving an accuracy exceeding 98% with optimized execution times.
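A hedged sketch of the kind of ML pipeline described above: contract opcode traces (made-up strings here) vectorised with TF-IDF and fed to a random-forest multi-class classifier; the feature choice, the toy corpus, and the vulnerability labels are assumptions for illustration, not the paper's model.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy corpus: opcode traces of contracts with assumed vulnerability labels.
opcodes = [
    "PUSH1 CALL SSTORE JUMP",      # reentrancy-like pattern (made up)
    "ADD MUL PUSH1 SSTORE",        # integer-overflow-like pattern (made up)
    "PUSH1 CALL SSTORE CALL",
    "ADD ADD MUL SSTORE",
]
labels = ["reentrancy", "overflow", "reentrancy", "overflow"]

clf = make_pipeline(TfidfVectorizer(token_pattern=r"\S+"),
                    RandomForestClassifier(random_state=0))
clf.fit(opcodes, labels)
print(clf.predict(["CALL SSTORE CALL PUSH1"]))     # classify an unseen opcode trace
```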
{"title":"Smart contracts auditing and multi-classification using machine learning algorithms: an efficient vulnerability detection in ethereum blockchain","authors":"Samia El Haddouti, Mohammed Khaldoune, Meryeme Ayache, Mohamed Dafir Ech-Cherif El Kettani","doi":"10.1007/s00607-024-01314-w","DOIUrl":"https://doi.org/10.1007/s00607-024-01314-w","url":null,"abstract":"<p>The adoption of Smart Contracts has revolutionized industries like DeFi and supply chain management, streamlining processes and enhancing transparency. However, ensuring their security is crucial due to their unchangeable nature, which makes them vulnerable to exploitation and errors. Neglecting security can lead to severe consequences such as financial losses and reputation damage. To address this, rigorous analytical processes are needed to evaluate Smart Contract security, despite challenges like cost and complexity associated with current tools. Following an empirical examination of current tools designed to identify vulnerabilities in Smart Contracts, this paper presents a robust and promising solution based on Machine Learning algorithms. The objective is to elevate the auditing and classification of Smart Contracts, building trust and confidence in Blockchain-based applications. By automating the security auditing process, the model not only reduces manual efforts and execution time but also ensures a comprehensive analysis, uncovering even the most complex security vulnerabilities that traditional tools may miss. Overall, the evaluation demonstrates that our proposed model surpasses conventional counterparts in terms of vulnerability detection performance, achieving an accuracy exceeding 98% with optimized execution times.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"28 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141548122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling end-to-end delays in TSCH wireless sensor networks using queuing theory and combinatorics
Pub Date: 2024-07-02, DOI: 10.1007/s00607-024-01313-x
Yevhenii Shudrenko, Andreas Timm-Giel
Wireless communication offers significant advantages over wired solutions in terms of flexibility, coverage, and maintenance, and is being actively deployed in industry. IEEE 802.15.4 standardizes the Physical and Medium Access Control (MAC) layers for Low Power and Lossy Networks (LLNs) and features Timeslotted Channel Hopping (TSCH) for reliable, low-latency communication with scheduling capabilities. Multiple scheduling schemes have been proposed to address Quality of Service (QoS) in challenging scenarios. However, most of them are evaluated through simulations and experiments, which are often time-consuming and may be difficult to reproduce. Analytical modeling of TSCH performance is lacking: the state of the art considers only one-hop communication with simplified traffic patterns. This work proposes a new framework based on queuing theory and combinatorics to evaluate end-to-end delays in multihop TSCH networks of arbitrary topology, traffic, and link conditions. The framework is validated in OMNeT++ simulations and shows a root-mean-square error (RMSE) below 6%, providing a quick and reliable latency estimation tool that supports decision-making and enables formalized comparison of existing scheduling solutions.
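A simplified back-of-the-envelope estimate in the spirit of the framework: with dedicated cells per hop in a slotframe of S slots, a packet arriving at a random time waits on average roughly half the spacing between that hop's cells before it can be forwarded. The independence assumptions, slot duration, and numbers below are illustrative only and do not reproduce the paper's queuing-theoretic model.

```python
def expected_e2e_delay_ms(hops: int, slotframe_len: int, slot_ms: float = 10.0,
                          cells_per_hop: int = 1) -> float:
    """Rough mean end-to-end delay: per hop, wait ~half the spacing between the hop's
    scheduled cells, plus one slot for the transmission itself (retries ignored)."""
    mean_wait_slots = slotframe_len / (2.0 * cells_per_hop)
    return hops * (mean_wait_slots + 1.0) * slot_ms

# 4-hop route, 101-slot slotframe, 10 ms slots, one dedicated cell per hop per slotframe:
print(expected_e2e_delay_ms(hops=4, slotframe_len=101))   # 2060.0 ms
```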
{"title":"Modeling end-to-end delays in TSCH wireless sensor networks using queuing theory and combinatorics","authors":"Yevhenii Shudrenko, Andreas Timm-Giel","doi":"10.1007/s00607-024-01313-x","DOIUrl":"https://doi.org/10.1007/s00607-024-01313-x","url":null,"abstract":"<p>Wireless communication offers significant advantages in terms of flexibility, coverage and maintenance compared to wired solutions and is being actively deployed in the industry. IEEE 802.15.4 standardizes the Physical and the Medium Access Control (MAC) layer for Low Power and Lossy Networks (LLNs) and features Timeslotted Channel Hopping (TSCH) for reliable, low-latency communication with scheduling capabilities. Multiple scheduling schemes were proposed to address Quality of Service (QoS) in challenging scenarios. However, most of them are evaluated through simulations and experiments, which are often time-consuming and may be difficult to reproduce. Analytical modeling of TSCH performance is lacking, as only one-hop communication with simplified traffic patterns is considered in state-of-the-art. This work proposes a new framework based on queuing theory and combinatorics to evaluate end-to-end delays in multihop TSCH networks of arbitrary topology, traffic and link conditions. The framework is validated in simulations using OMNeT++ and shows below 6% root-mean-square error (RMSE), providing quick and reliable latency estimation tool to support decision-making and enable formalized comparison of existing scheduling solutions.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"14 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141516458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deploying virtual machines poses a significant challenge for cloud data centers, requiring careful consideration of objectives such as minimizing energy consumption and resource wastage, ensuring load balancing, and meeting service level agreements. While researchers have explored multi-objective methods for virtual machine placement, evaluating candidate solutions in such scenarios remains complex. In this paper, we introduce two novel multi-objective algorithms tailored to this challenge. The VMPMFuzzyORL method employs reinforcement learning for virtual machine placement, with candidate solutions assessed by a fuzzy system. While practical, the fuzzy system introduces notable runtime overhead. To mitigate this, we propose MRRL, an alternative approach that first clusters virtual machines using the k-means algorithm and then optimizes placement with a customized reinforcement learning strategy using multiple reward signals. Extensive simulations highlight the significant advantages of these approaches over existing techniques, particularly in energy efficiency, resource utilization, load balancing, and overall execution time.
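A hedged sketch of the clustering-then-placement idea: VMs are grouped by resource demand with k-means and then placed cluster by cluster using a simple epsilon-greedy rule driven by a toy reward; the reward shape, data, and parameters are assumptions and this is not the MRRL algorithm itself.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
vms = rng.uniform(0.05, 0.4, size=(30, 2))     # 30 VMs: (cpu, ram) demand fractions (synthetic)
hosts = np.zeros((6, 2))                        # 6 hosts, current (cpu, ram) load

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vms)

def reward(load):
    """Toy reward: penalise the number of active hosts (energy) and load imbalance."""
    return -(np.count_nonzero(load.sum(axis=1)) + 5.0 * load.sum(axis=1).std())

for c in range(3):                              # handle each demand cluster in turn
    for vm in vms[clusters == c]:
        if rng.random() < 0.1:                  # epsilon-greedy exploration
            h = rng.integers(len(hosts))
        else:                                   # exploit: host giving the best one-step reward
            h = max(range(len(hosts)),
                    key=lambda i: reward(hosts + np.eye(len(hosts))[i][:, None] * vm))
        hosts[h] += vm                          # place the VM (capacity checks omitted)

print("host loads:\n", hosts.round(2))
```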
{"title":"Enhancing virtual machine placement efficiency in cloud data centers: a hybrid approach using multi-objective reinforcement learning and clustering strategies","authors":"Arezoo Ghasemi, Abolfazl Toroghi Haghighat, Amin Keshavarzi","doi":"10.1007/s00607-024-01311-z","DOIUrl":"https://doi.org/10.1007/s00607-024-01311-z","url":null,"abstract":"<p>Deploying virtual machines poses a significant challenge for cloud data centers, requiring careful consideration of various objectives such as minimizing energy consumption, resource wastage, ensuring load balancing, and meeting service level agreements. While researchers have explored multi-objective methods to tackle virtual machine placement, evaluating potential solutions remains complex in such scenarios. In this paper, we introduce two novel multi-objective algorithms tailored to address this challenge. The VMPMFuzzyORL method employs reinforcement learning for virtual machine placement, with candidate solutions assessed using a fuzzy system. While practical, incorporating fuzzy systems introduces notable runtime overhead. To mitigate this, we propose MRRL, an alternative approach involving initial virtual machine clustering using the k-means algorithm, followed by optimized placement utilizing a customized reinforcement learning strategy with multiple reward signals. Extensive simulations highlight the significant advantages of these approaches over existing techniques, particularly energy efficiency, resource utilization, load balancing, and overall execution time.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"2 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141516489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Packet header-based reweight-long short term memory (Rew-LSTM) method for encrypted network traffic classification
Pub Date: 2024-07-02, DOI: 10.1007/s00607-024-01306-w
Jiangang Hou, Xin Li, Hongji Xu, Chun Wang, Lizhen Cui, Zhi Liu, Changzhen Hu
With the development of Internet technology, cyberspace security has become a research hotspot, and network traffic classification is closely related to it. In this paper, the problem of classification based on raw traffic data is investigated. This involves analyzing packets at a fine granularity: separating packet headers from payloads, padding and aligning the headers, and converting them into structured data in three representation types: bit, byte, and segmented protocol fields. On this basis, we propose the Rew-LSTM classification model and run experiments on publicly available encrypted-traffic datasets. The results show that excellent performance can be obtained using only the data in packet headers for multi-class classification, especially with the bit representation, which outperforms state-of-the-art methods. In addition, we propose a global normalization method, and experimental results show that it outperforms feature-specific normalization for both Tor traffic and regular encrypted traffic.
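A minimal sketch of feeding bit-level packet-header representations to an LSTM classifier in PyTorch; the header length, number of classes, and the plain LSTM (rather than the paper's reweighting Rew-LSTM) are assumptions for illustration.

```python
import torch
import torch.nn as nn

def header_to_bits(header: bytes, length: int = 40) -> torch.Tensor:
    """Pad/trim a packet header to `length` bytes and unpack it into a bit sequence."""
    padded = header.ljust(length, b"\x00")[:length]
    bits = [(byte >> i) & 1 for byte in padded for i in range(7, -1, -1)]
    return torch.tensor(bits, dtype=torch.float32).view(-1, 1)   # (length*8 steps, 1 feature)

class HeaderLSTM(nn.Module):
    def __init__(self, hidden: int = 64, n_classes: int = 12):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, seq_len, 1)
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])                    # logits over traffic classes

batch = torch.stack([header_to_bits(b"\x45\x00\x00\x3c"), header_to_bits(b"\x60\x00")])
print(HeaderLSTM()(batch).shape)                 # torch.Size([2, 12])
```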
{"title":"Packet header-based reweight-long short term memory (Rew-LSTM) method for encrypted network traffic classification","authors":"Jiangang Hou, Xin Li, Hongji Xu, Chun Wang, Lizhen Cui, Zhi Liu, Changzhen Hu","doi":"10.1007/s00607-024-01306-w","DOIUrl":"https://doi.org/10.1007/s00607-024-01306-w","url":null,"abstract":"<p>With the development of Internet technology, cyberspace security has become a research hotspot. Network traffic classification is closely related to cyberspace security. In this paper, the problem of classification based on raw traffic data is investigated. This involves the granularity analysis of packets, separating packet headers from payloads, complementing and aligning packet headers, and converting them into structured data, including three representation types: bit, byte, and segmented protocol fields. Based on this, we propose the Rew-LSTM classification model for experiments on publicly available datasets of encrypted traffic, and the results show that excellent results can be obtained when using only the data in packet headers for multiple classification, especially when the data is represented using bit, which outperforms state-of-the-art methods. In addition, we propose a global normalization method, and experimental results show that it outperforms feature-specific normalization methods for both Tor traffic and regular encrypted traffic.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"15 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141516457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Back-and-Forth (BaF): a new greedy algorithm for geometric path planning of unmanned aerial vehicles
Pub Date: 2024-07-01, DOI: 10.1007/s00607-024-01309-7
Selcuk Aslan
The autonomous task success of an unmanned aerial vehicle (UAV), or its military specialization, the unmanned combat aerial vehicle (UCAV), is directly related to the planned path. However, planning a path for a UAV or UCAV system requires solving a challenging optimization problem that considers different objectives concerning the enemy threats protecting the battlefield, fuel or battery consumption, and kinematic constraints on turning maneuvers. Because of the increasing demands placed on UAV systems and the game-changing roles they play, developing new and versatile path planning algorithms has become more critical and urgent. In this study, a greedy algorithm named Back-and-Forth (BaF) was designed and introduced for solving the path planning problem. The BaF algorithm takes its name from its main strategy: a heuristic approach generates two predecessor paths, one calculated from the start point to the target point and the other in the reverse direction, and then combines them, exploiting their advantageous line segments to obtain safer, shorter, and more maneuverable path candidates. The performance of the BaF was investigated on three battlefield scenarios and their twelve test cases. Moreover, the BaF was integrated into the workflow of a well-known metaheuristic, the artificial bee colony (ABC) algorithm, and detailed experiments were carried out to evaluate its possible contribution to the path planning capabilities of another technique. The experimental results show that the BaF algorithm plans paths that are at least promising, and generally better, than those of the other tested metaheuristic techniques, with complete consistency, and runs nine or more times faster, as validated by the comparison between the BaF and ABC algorithms. The results further prove that integrating the BaF boosts the performance of the ABC and helps it outperform all fifteen competitors on nine of the twelve test cases.
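A toy greedy planner in the spirit of the back-and-forth idea: the same greedy step rule builds one path from start to target and one in the reverse direction, and the cheaper of the two is kept. The grid-free setup, threat model, and cost weights are assumptions, and the real BaF splices advantageous segments of the two paths rather than simply picking one.

```python
import math

THREATS = [((5.0, 4.0), 2.0), ((8.0, 8.0), 1.5)]   # (centre, radius) of enemy threats (assumed)

def threat_cost(p):
    """Penalty that grows the deeper a waypoint sits inside a threat circle."""
    return sum(max(0.0, r - math.dist(p, c)) * 10.0 for c, r in THREATS)

def greedy_path(start, goal, step=1.0, max_steps=50):
    """March toward the goal, nudging the heading to the least-cost of three candidates."""
    path, p = [start], start
    for _ in range(max_steps):
        if math.dist(p, goal) <= step:
            path.append(goal)
            break
        ang = math.atan2(goal[1] - p[1], goal[0] - p[0])
        cands = [(p[0] + step * math.cos(ang + d), p[1] + step * math.sin(ang + d))
                 for d in (-0.5, 0.0, 0.5)]          # slight left / straight / slight right
        p = min(cands, key=lambda q: threat_cost(q) + math.dist(q, goal))
        path.append(p)
    return path

def path_cost(path):
    return sum(math.dist(a, b) for a, b in zip(path, path[1:])) + sum(map(threat_cost, path))

fwd = greedy_path((0.0, 0.0), (10.0, 10.0))
bwd = list(reversed(greedy_path((10.0, 10.0), (0.0, 0.0))))
best = min((fwd, bwd), key=path_cost)
print(f"{len(best)} waypoints, cost {path_cost(best):.2f}")
```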
{"title":"Back-and-Forth (BaF): a new greedy algorithm for geometric path planning of unmanned aerial vehicles","authors":"Selcuk Aslan","doi":"10.1007/s00607-024-01309-7","DOIUrl":"https://doi.org/10.1007/s00607-024-01309-7","url":null,"abstract":"<p>The autonomous task success of an unmanned aerial vehiclel (UAV) or its military specialization called the unmanned combat aerial vehicle (UCAV) has a direct relationship with the planned path. However, planning a path for a UAV or UCAV system requires solving a challenging problem optimally by considering the different objectives about the enemy threats protecting the battlefield, fuel consumption or battery usage and kinematic constraints on the turning maneuvers. Because of the increasing demands to the UAV systems and game-changing roles played by them, developing new and versatile path planning algorithms become more critical and urgent. In this study, a greedy algorithm named as the Back-and-Forth (BaF) was designed and introduced for solving the path planning problem. The BaF algorithm gets its name from the main strategy where a heuristic approach is responsible to generate two predecessor paths, one of which is calculated from the start point to the target point, while the other is calculated in the reverse direction, and combines the generated paths for utilizing their advantageous line segments when obtaining more safe, short and maneuverable path candidates. The performance of the BaF was investigated over three battlefield scenarios and twelve test cases belonging to them. Moreover, the BaF was integrated into the workflow of a well-known meta-heuristic, artificial bee colony (ABC) algorithm, and detailed experiments were also carried out for evaluating the possible contribution of the BaF on the path planning capabilities of another technique. The results of the experiments showed that the BaF algorithm is able to plan at least promising or generally better paths with the exact consistency than other tested meta-heuristic techniques and runs nine or more times faster as validated through the comparison between the BaF and ABC algorithms. The results of the experiments further proved that the integration of the BaF boosts the performance of the ABC and helps it to outperform all of fifteen competitors for nine of twelve test cases.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"80 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI enabled: a novel IoT-based fake currency detection using millimeter wave (mmWave) sensor
Pub Date: 2024-06-27, DOI: 10.1007/s00607-024-01300-2
Fahim Niaz, Jian Zhang, Muhammad Khalid, Kashif Naseer Qureshi, Yang Zheng, Muhammad Younas, Naveed Imran
In recent years, millimeter wave sensors have taken on a paramount role, especially in the non-invasive and ubiquitous analysis of various materials and objects. This paper introduces a novel IoT-based fake currency detection system using a millimeter wave (mmWave) sensor that leverages machine learning and deep learning algorithms to distinguish fake from genuine currency based on their distinct sensor reflections. To gather these reflections, or signatures, from different currency notes, we utilize multiple receiving (RX) antennae of the radar sensor module. Our framework encompasses three approaches to genuine and fake currency detection: a Convolutional Neural Network (CNN), k-Nearest Neighbors (k-NN), and a Transfer Learning Technique (TLT). After extensive experiments, the framework achieves classification accuracies of 96%, 94%, and 98% for CNN, k-NN, and TLT, respectively, in distinguishing 10 different currency notes using radar signals.
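A small sketch of the k-NN branch of such a pipeline on made-up radar signature vectors; the feature dimensionality, labels, and synthetic data are assumptions purely for illustration and say nothing about the accuracies reported above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
# Synthetic "mmWave signatures": 10 note classes, 40 samples each, 64-point reflection profile.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(40, 64)) for c in range(10)])
y = np.repeat(np.arange(10), 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"toy k-NN accuracy: {knn.score(X_te, y_te):.2f}")   # high only because the data is synthetic
```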
{"title":"AI enabled: a novel IoT-based fake currency detection using millimeter wave (mmWave) sensor","authors":"Fahim Niaz, Jian Zhang, Muhammad Khalid, Kashif Naseer Qureshi, Yang Zheng, Muhammad Younas, Naveed Imran","doi":"10.1007/s00607-024-01300-2","DOIUrl":"https://doi.org/10.1007/s00607-024-01300-2","url":null,"abstract":"<p>In recent years, the significance of millimeter wave sensors has achieved a paramount role, especially in the non-invasive and ubiquitous analysis of various materials and objects. This paper introduces a novel IoT-based fake currency detection using millimeter wave (mmWave) that leverages machine and deep learning algorithms for the detection of fake and genuine currency based on their distinct sensor reflections. To gather these reflections or signatures from different currency notes, we utilize multiple receiving (<i>RX</i>) antennae of the radar sensor module. Our proposed framework encompasses three different approaches for genuine and fake currency detection, Convolutional Neural Network (CNN), k-nearest Neighbor (k-NN), and Transfer Learning Technique (TLT). After extensive experiments, the proposed framework exhibits impressive accuracy and obtained classification accuracy of 96%, 94%, and 98% for CNN, k-NN, and TLT in distinguishing 10 different currency notes using radar signals.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"1 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}