Constructing and Visualizing Uniform Tilings
Pub Date: 2023-10-17 | DOI: 10.3390/computers12100208
Nelson Max
This paper describes a system that takes user input of a pattern of regular polygons around one vertex and attempts to construct a uniform tiling with the same pattern at every vertex by adding one polygon at a time. The system constructs spherical, planar, or hyperbolic tilings when the sum of the interior angles of the user-specified regular polygons is respectively less than, equal to, or greater than 360°. Other works have catalogued uniform tilings in tables and/or illustrations. In contrast, this system was developed as an interactive educational tool for people to learn about symmetry and tilings by trial and error, by proposing potential vertex patterns and investigating whether they work. Users can watch the rest of the polygons being added automatically, one by one, with recursive backtracking. When a trial polygon addition is found to violate the conditions of a uniform tiling, polygons are removed one by one until a configuration with another compatible choice is found, and that choice is tried next.
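To make the angle-sum classification above concrete, here is a minimal sketch (not the paper's code; the function name and example patterns are hypothetical) that computes the interior-angle sum of a proposed vertex pattern and reports which geometry it would tile:

```python
import math

def classify_vertex_pattern(polygon_sides):
    """Classify a vertex pattern of regular polygons by its interior-angle sum.

    polygon_sides: list of n-values, e.g. [4, 4, 4, 4] for four squares.
    The interior angle of a regular n-gon is (n - 2) * 180 / n degrees.
    """
    angle_sum = sum((n - 2) * 180.0 / n for n in polygon_sides)
    if math.isclose(angle_sum, 360.0):
        return angle_sum, "planar (Euclidean) tiling"
    if angle_sum < 360.0:
        return angle_sum, "spherical tiling"
    return angle_sum, "hyperbolic tiling"

# (4,4,4,4) tiles the plane, (5,5,5) is spherical (the dodecahedron),
# and (7,7,7) requires the hyperbolic plane.
for pattern in ([4, 4, 4, 4], [5, 5, 5], [7, 7, 7]):
    print(pattern, classify_vertex_pattern(pattern))
```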
{"title":"Constructing and Visualizing Uniform Tilings","authors":"Nelson Max","doi":"10.3390/computers12100208","DOIUrl":"https://doi.org/10.3390/computers12100208","url":null,"abstract":"This paper describes a system which takes user input of a pattern of regular polygons around one vertex and attempts to construct a uniform tiling with the same pattern at every vertex by adding one polygon at a time. The system constructs spherical, planar, or hyperbolic tilings when the sum of the interior angles of the user-specified regular polygons is respectively less than, equal to, or greater than 360∘. Other works have catalogued uniform tilings in tables and/or illustrations. In contrast, this system was developed as an interactive educational tool for people to learn about symmetry and tilings by trial and error through proposing potential vertex patterns and investigating whether they work. Users can watch the rest of the polygons being automatically added one by one with recursive backtracking. When a trial polygon addition is found to violate the conditions of a regular tiling, polygons are removed one by one until a configuration with another compatible choice is found, and that choice is tried next.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135994296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Machine Learning and Routing Protocols for Optimizing Distributed SPARQL Queries in Collaboration
Pub Date: 2023-10-17 | DOI: 10.3390/computers12100210
Benjamin Warnke, Stefan Fischer, Sven Groppe
Due to increasing digitization, the amount of data in the Internet of Things (IoT) is constantly increasing. To process queries efficiently, strategies must therefore be found to reduce the transmitted data as much as possible. SPARQL is particularly well suited to the IoT environment because it can handle various data structures. Due to the flexibility of these data structures, however, more data have to be joined again during processing. A good join order is therefore crucial, as it significantly impacts the number of intermediate results. However, computing the best join order is an NP-hard problem because the total number of possible join orders increases exponentially with the number of inputs to be combined. In addition, there are different definitions of an optimal join order. Machine learning uses stochastic methods to quickly achieve good results even on complex problems. Other DBMSs also consider reducing network traffic but neglect the network topology. Network topology is crucial in the IoT because devices are not evenly distributed. Therefore, we present new techniques for collaboration between routing, application, and machine learning. Our approach, which pushes the operators as close as possible to the data source, reduces the produced network traffic by 10%. Additionally, the model can reduce the number of intermediate results by a factor of 100 in comparison to other state-of-the-art approaches.
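As a rough illustration of why join order matters, the sketch below shows a classic greedy heuristic that orders triple patterns by estimated cardinality while avoiding Cartesian products; it is not the authors' ML- and routing-aware optimizer, and the patterns and cardinality estimates are made up:

```python
def greedy_join_order(triple_patterns, cardinality):
    """Order triple patterns greedily: start with the smallest estimated
    cardinality, then repeatedly pick the cheapest pattern that shares a
    variable with what has been joined so far (avoiding Cartesian products).

    triple_patterns: list of (s, p, o) tuples; variables start with '?'.
    cardinality: dict mapping each pattern to its estimated result size.
    """
    def variables(tp):
        return {t for t in tp if isinstance(t, str) and t.startswith("?")}

    remaining = sorted(triple_patterns, key=lambda tp: cardinality[tp])
    order = [remaining.pop(0)]
    bound = variables(order[0])

    while remaining:
        # Prefer patterns connected to already-bound variables.
        connected = [tp for tp in remaining if variables(tp) & bound] or remaining
        nxt = min(connected, key=lambda tp: cardinality[tp])
        remaining.remove(nxt)
        order.append(nxt)
        bound |= variables(nxt)
    return order

# Hypothetical patterns and estimates for a small sensor query.
patterns = [("?s", "rdf:type", "sosa:Sensor"),
            ("?s", "sosa:observes", "?prop"),
            ("?obs", "sosa:madeBySensor", "?s")]
est = {patterns[0]: 50, patterns[1]: 500, patterns[2]: 10000}
print(greedy_join_order(patterns, est))
```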
{"title":"Using Machine Learning and Routing Protocols for Optimizing Distributed SPARQL Queries in Collaboration","authors":"Benjamin Warnke, Stefan Fischer, Sven Groppe","doi":"10.3390/computers12100210","DOIUrl":"https://doi.org/10.3390/computers12100210","url":null,"abstract":"Due to increasing digitization, the amount of data in the Internet of Things (IoT) is constantly increasing. In order to be able to process queries efficiently, strategies must, therefore, be found to reduce the transmitted data as much as possible. SPARQL is particularly well-suited to the IoT environment because it can handle various data structures. Due to the flexibility of data structures, however, more data have to be joined again during processing. Therefore, a good join order is crucial as it significantly impacts the number of intermediate results. However, computing the best linking order is an NP-hard problem because the total number of possible linking orders increases exponentially with the number of inputs to be combined. In addition, there are different definitions of optimal join orders. Machine learning uses stochastic methods to achieve good results even with complex problems quickly. Other DBMSs also consider reducing network traffic but neglect the network topology. Network topology is crucial in IoT as devices are not evenly distributed. Therefore, we present new techniques for collaboration between routing, application, and machine learning. Our approach, which pushes the operators as close as possible to the data source, minimizes the produced network traffic by 10%. Additionally, the model can reduce the number of intermediate results by a factor of 100 in comparison to other state-of-the-art approaches.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136037623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Robustness of ML-Based Network Intrusion Detection Systems: An Adversarial and Distribution Shift Perspective
Pub Date: 2023-10-17 | DOI: 10.3390/computers12100209
Minxiao Wang, Ning Yang, Dulaj H. Gunasinghe, Ning Weng
Utilizing machine learning (ML)-based approaches for network intrusion detection systems (NIDSs) raises valid concerns due to the inherent susceptibility of current ML models to various threats. Of particular concern are two significant threats associated with ML: adversarial attacks and distribution shifts. Although there has been a growing emphasis on researching the robustness of ML, current studies primarily concentrate on addressing specific challenges individually. These studies tend to target a particular aspect of robustness and propose innovative techniques to enhance that specific aspect. However, as a capability to respond to unexpected situations, the robustness of ML should be comprehensively built and maintained at every stage. In this paper, we aim to link the varying efforts throughout the whole ML workflow to guide the design of ML-based NIDSs with systematic robustness. Toward this goal, we conduct a methodical evaluation of the progress made thus far in enhancing the robustness of the targeted NIDS application task. Specifically, we delve into the robustness of ML-based NIDSs against adversarial attacks and distribution shift scenarios. For each perspective, we organize the literature into robustness-related challenges and technical solutions based on the ML workflow. For instance, we introduce advanced potential solutions that can improve robustness, such as data augmentation, contrastive learning, and robustness certification. Based on our survey, we identify and discuss the research gaps and future directions for ML robustness in the field of NIDS. Finally, we highlight that building and patching robustness throughout the life cycle of an ML-based NIDS is critical.
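As one concrete example of the adversarial-attack perspective surveyed here, the sketch below applies an FGSM-style feature-space perturbation against a toy logistic-regression scorer; the model weights and flow features are synthetic, and this only illustrates the kind of robustness test discussed, not any specific method from the surveyed papers:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """FGSM-style perturbation of a flow-feature vector against a logistic
    regression scorer: move each feature by eps in the direction that
    increases the classifier's loss. w and b are hypothetical model weights.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted attack probability
    grad = (p - y) * w                        # d(log-loss)/dx for logistic regression
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1                # toy model over 5 flow features
x, y = rng.normal(size=5), 1.0                # one malicious flow (label 1)
x_adv = fgsm_perturb(x, y, w, b, eps=0.05)
print("score before:", 1 / (1 + np.exp(-(w @ x + b))))
print("score after :", 1 / (1 + np.exp(-(w @ x_adv + b))))
```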
{"title":"On the Robustness of ML-Based Network Intrusion Detection Systems: An Adversarial and Distribution Shift Perspective","authors":"Minxiao Wang, Ning Yang, Dulaj H. Gunasinghe, Ning Weng","doi":"10.3390/computers12100209","DOIUrl":"https://doi.org/10.3390/computers12100209","url":null,"abstract":"Utilizing machine learning (ML)-based approaches for network intrusion detection systems (NIDSs) raises valid concerns due to the inherent susceptibility of current ML models to various threats. Of particular concern are two significant threats associated with ML: adversarial attacks and distribution shifts. Although there has been a growing emphasis on researching the robustness of ML, current studies primarily concentrate on addressing specific challenges individually. These studies tend to target a particular aspect of robustness and propose innovative techniques to enhance that specific aspect. However, as a capability to respond to unexpected situations, the robustness of ML should be comprehensively built and maintained in every stage. In this paper, we aim to link the varying efforts throughout the whole ML workflow to guide the design of ML-based NIDSs with systematic robustness. Toward this goal, we conduct a methodical evaluation of the progress made thus far in enhancing the robustness of the targeted NIDS application task. Specifically, we delve into the robustness aspects of ML-based NIDSs against adversarial attacks and distribution shift scenarios. For each perspective, we organize the literature in robustness-related challenges and technical solutions based on the ML workflow. For instance, we introduce some advanced potential solutions that can improve robustness, such as data augmentation, contrastive learning, and robustness certification. According to our survey, we identify and discuss the ML robustness research gaps and future direction in the field of NIDS. Finally, we highlight that building and patching robustness throughout the life cycle of an ML-based NIDS is critical.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136032784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented Reality in Primary Education: An Active Learning Approach in Mathematics
Pub Date: 2023-10-16 | DOI: 10.3390/computers12100207
Christina Volioti, Christos Orovas, Theodosios Sapounidis, George Trachanas, Euclid Keramopoulos
Active learning, a student-centered approach, engages students in the learning process and requires them to solve problems using educational activities that enhance their learning outcomes. Augmented Reality (AR) has revolutionized the field of education by creating an intuitive environment where real and virtual objects interact, thereby facilitating the understanding of complex concepts. Consequently, this research proposes an application, called “Cooking Math”, that utilizes AR to promote active learning in sixth-grade elementary school mathematics. The application comprises various educational games, each presenting a real-life problem, particularly focused on cooking recipes. To evaluate the usability of the proposed AR application, a pilot study was conducted involving three groups: (a) 65 undergraduate philosophy and education students, (b) 74 undergraduate engineering students, and (c) 35 sixth-grade elementary school students. To this end, (a) the System Usability Scale (SUS) questionnaire was administered to all participants and (b) semi-structured interviews were organized to gather the participants’ perspectives. The SUS results were quite satisfactory. In addition, the interview outcomes indicated that the elementary students displayed enthusiasm, the philosophy and education students emphasized the pedagogical value of such technology, while the engineering students suggested that further improvements were necessary to enhance the effectiveness of the learning experience.
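For readers unfamiliar with SUS scoring, the standard formula that turns the ten 1-5 Likert responses into a 0-100 score can be sketched as follows (the example responses are hypothetical, not data from this study):

```python
def sus_score(responses):
    """Compute the System Usability Scale (SUS) score from ten 1-5 Likert
    responses: odd items contribute (response - 1), even items contribute
    (5 - response), and the sum is scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses in the range 1..5")
    odd = sum(r - 1 for r in responses[0::2])    # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])   # items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)

# Hypothetical questionnaire from one participant.
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # -> 87.5
```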
{"title":"Augmented Reality in Primary Education: An Active Learning Approach in Mathematics","authors":"Christina Volioti, Christos Orovas, Theodosios Sapounidis, George Trachanas, Euclid Keramopoulos","doi":"10.3390/computers12100207","DOIUrl":"https://doi.org/10.3390/computers12100207","url":null,"abstract":"Active learning, a student-centered approach, engages students in the learning process and requires them to solve problems using educational activities that enhance their learning outcomes. Augmented Reality (AR) has revolutionized the field of education by creating an intuitive environment where real and virtual objects interact, thereby facilitating the understanding of complex concepts. Consequently, this research proposes an application, called “Cooking Math”, that utilizes AR to promote active learning in sixth-grade elementary school mathematics. The application comprises various educational games, each presenting a real-life problem, particularly focused on cooking recipes. To evaluate the usability of the proposed AR application, a pilot study was conducted involving three groups: (a) 65 undergraduate philosophy and education students, (b) 74 undergraduate engineering students, and (c) 35 sixth-grade elementary school students. To achieve this, (a) the System Usability Scale (SUS) questionnaire was provided to all participants and (b) semi-structured interviews were organized to gather the participants’ perspectives. The SUS results were quite satisfactory. In addition, the interviews’ outcomes indicated that the elementary students displayed enthusiasm, the philosophy and education students emphasized the pedagogy value of such technology, while the engineering students suggested that further improvements were necessary to enhance the effectiveness of the learning experience.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"1140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136114010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Potential of Machine Learning for Wind Speed and Direction Short-Term Forecasting: A Systematic Review
Pub Date: 2023-10-13 | DOI: 10.3390/computers12100206
Décio Alves, Fábio Mendonça, Sheikh Shanawaz Mostafa, Fernando Morgado-Dias
Wind forecasting, which is essential for numerous services and for safety, has significantly improved in accuracy due to advances in machine learning. This study reviews 23 articles, published from 1983 to 2023, on machine learning for wind speed and direction nowcasting. The prediction horizons ranged from 1 min to 1 week, with more articles at lower temporal resolutions. Most works employed neural networks, with recent work focusing on deep learning models. Among the reported performance metrics, the most prevalent were mean absolute error, mean squared error, and mean absolute percentage error, for which the mean performance across the examined works was 0.56 m/s, 1.10 m/s, and 6.72%, respectively. The results underscore the effectiveness of machine learning in predicting wind conditions from high-resolution time data and show that deep learning models surpassed traditional methods, improving the accuracy of wind speed and direction forecasts. Moreover, it was found that the inclusion of non-wind weather variables does not benefit the models’ overall performance. Further studies are recommended to predict both wind speed and direction using diverse spatial data points, with high-resolution data and deep learning models.
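For reference, the three error metrics reported above can be computed as follows; this is a generic sketch with made-up forecast values, not code from any of the reviewed works:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Mean absolute error, mean squared error, and mean absolute percentage
    error: the three metrics most often reported in the surveyed works."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mae = np.mean(np.abs(y_true - y_pred))
    mse = np.mean((y_true - y_pred) ** 2)
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))  # assumes no zero wind speeds
    return mae, mse, mape

# Hypothetical wind-speed nowcast (m/s) against observations.
obs = [3.2, 4.1, 5.0, 6.3, 5.5]
pred = [3.0, 4.4, 4.7, 6.0, 5.9]
print(forecast_metrics(obs, pred))
```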
{"title":"The Potential of Machine Learning for Wind Speed and Direction Short-Term Forecasting: A Systematic Review","authors":"Décio Alves, Fábio Mendonça, Sheikh Shanawaz Mostafa, Fernando Morgado-Dias","doi":"10.3390/computers12100206","DOIUrl":"https://doi.org/10.3390/computers12100206","url":null,"abstract":"Wind forecasting, which is essential for numerous services and safety, has significantly improved in accuracy due to machine learning advancements. This study reviews 23 articles from 1983 to 2023 on machine learning for wind speed and direction nowcasting. The wind prediction ranged from 1 min to 1 week, with more articles at lower temporal resolutions. Most works employed neural networks, focusing recently on deep learning models. Among the reported performance metrics, the most prevalent were mean absolute error, mean squared error, and mean absolute percentage error. Considering these metrics, the mean performance of the examined works was 0.56 m/s, 1.10 m/s, and 6.72%, respectively. The results underscore the novel effectiveness of machine learning in predicting wind conditions using high-resolution time data and demonstrated that deep learning models surpassed traditional methods, improving the accuracy of wind speed and direction forecasts. Moreover, it was found that the inclusion of non-wind weather variables does not benefit the model’s overall performance. Further studies are recommended to predict both wind speed and direction using diverse spatial data points, and high-resolution data are recommended along with the usage of deep learning models.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135859093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novel Optimized Strategy Based on Multi-Next-Hops Election to Reduce Video Transmission Delay for GPSR Protocol over VANETs
Pub Date: 2023-10-12 | DOI: 10.3390/computers12100205
Imane Zaimi, Abdelali Boushaba, Mohammed Oumsis, Brahim Jabir, Moulay Hafid Aabidi, Adil EL Makrani
Reducing transmission delay is one of the most important issues that must be considered by routing protocols, especially for multimedia applications over vehicular ad hoc networks (VANETs). To this end, we propose an extension of the FzGR (fuzzy geographical routing protocol), named MNH-FGR (multi-next-hops fuzzy geographical routing protocol). MNH-FGR is a multipath protocol that gains great extensibility by employing different link metrics and weight functions. To schedule multimedia traffic among multiple heterogeneous links, MNH-FGR integrates the weighted round-robin (WRR) scheduling algorithm, where the link weights needed for scheduling are computed from the multi-constrained QoS metric provided by the FzGR. The main goal is to ensure the stability of the network and the continuity of data flow during transmission. Simulation experiments with NS-2 are presented to validate our proposal. Additionally, we present a neural network algorithm to analyze and optimize the performance of routing protocols. The results show that MNH-FGR can satisfy critical multimedia applications with strict timing constraints, and the DNN model used can provide insights into which features affect protocol performance.
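To illustrate the weighted round-robin scheduling that MNH-FGR integrates, here is a minimal generic WRR sketch; in the protocol the weights would be derived from the FzGR's multi-constrained QoS metric, whereas here they are simply given as integers:

```python
from collections import deque

def weighted_round_robin(links, weights, packets):
    """Distribute packets over multiple next-hop links with weighted
    round-robin: in each cycle, link i may send up to weights[i] packets.
    """
    queue = deque(packets)
    assignment = {link: [] for link in links}
    while queue:
        for link, weight in zip(links, weights):
            for _ in range(weight):
                if not queue:
                    break
                assignment[link].append(queue.popleft())
    return assignment

# Hypothetical example: three candidate next hops with QoS-derived weights 3, 2, 1.
print(weighted_round_robin(["hopA", "hopB", "hopC"], [3, 2, 1], list(range(12))))
```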
{"title":"Novel Optimized Strategy Based on Multi-Next-Hops Election to Reduce Video Transmission Delay for GPSR Protocol over VANETs","authors":"Imane Zaimi, Abdelali Boushaba, Mohammed Oumsis, Brahim Jabir, Moulay Hafid Aabidi, Adil EL Makrani","doi":"10.3390/computers12100205","DOIUrl":"https://doi.org/10.3390/computers12100205","url":null,"abstract":"Reducing transmission traffic delay is one of the most important issues that need to be considered for routing protocols, especially in the case of multimedia applications over vehicular ad hoc networks (VANET). To this end, we propose an extension of the FzGR (fuzzy geographical routing protocol), named MNH-FGR (multi-next-hops fuzzy geographical routing protocol). MNH-FGR is a multipath protocol that gains great extensibility by employing different link metrics and weight functions. To schedule multimedia traffic among multiple heterogeneous links, MNH-FGR integrates the weighted round-robin (WRR) scheduling algorithm, where the link weights, needed for scheduling, are computed using the multi-constrained QoS metric provided by the FzGR. The main goal is to ensure the stability of the network and the continuity of data flow during transmission. Simulation experiments with NS-2 are presented in order to validate our proposal. Additionally, we present a neural network algorithm to analyze and optimize the performance of routing protocols. The results show that MNH-FGR could satisfy critical multimedia applications with high on-time constraints. Also, the DNN model used can provide insights about which features had an impact on protocol performance.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135967907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Determining Resampling Ratios Using BSMOTE and SVM-SMOTE for Identifying Rare Attacks in Imbalanced Cybersecurity Data
Pub Date: 2023-10-11 | DOI: 10.3390/computers12100204
Sikha S. Bagui, Dustin Mink, Subhash C. Bagui, Sakthivel Subramaniam
Machine learning is widely used in cybersecurity for detecting network intrusions. Though network attacks are increasing steadily, the percentage of such attacks in actual network traffic is significantly small, and herein lies the difficulty in training machine learning models to detect and classify malicious attacks amid routine traffic: the ratio of actual attacks to benign data is very low, which produces highly imbalanced datasets. In this work, we address this issue using data resampling techniques. Though several oversampling and undersampling techniques are available, this paper addresses how they are most effectively used. Two oversampling techniques, Borderline SMOTE and SVM-SMOTE, are used for oversampling the minority data, and random undersampling is used for undersampling the majority data. Both oversampling techniques use KNN after selecting a random minority sample point, so the impact of varying KNN values on the performance of the oversampling techniques is also analyzed. Random Forest is used for classification of the rare attacks. This work is done on a widely used cybersecurity dataset, UNSW-NB15, and the results show that 10% oversampling gives better results for both BSMOTE and SVM-SMOTE.
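A hedged sketch of this kind of resampling pipeline using the imbalanced-learn library is shown below; the synthetic dataset stands in for UNSW-NB15, and the reading of "10% oversampling" as a minority-to-majority sampling ratio of 0.1 is an assumption rather than necessarily the paper's exact setup:

```python
from imblearn.over_sampling import BorderlineSMOTE, SVMSMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for a rare-attack class (1% positives), not UNSW-NB15.
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.99, 0.01], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Oversample the rare class to 10% of the majority, then undersample the
# majority; k_neighbors mirrors the KNN parameter whose effect is analyzed.
sampler = BorderlineSMOTE(sampling_strategy=0.1, k_neighbors=5, random_state=42)
# sampler = SVMSMOTE(sampling_strategy=0.1, k_neighbors=5, random_state=42)  # alternative
X_res, y_res = sampler.fit_resample(X_tr, y_tr)
X_res, y_res = RandomUnderSampler(sampling_strategy=0.5,
                                  random_state=42).fit_resample(X_res, y_res)

clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```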
{"title":"Determining Resampling Ratios Using BSMOTE and SVM-SMOTE for Identifying Rare Attacks in Imbalanced Cybersecurity Data","authors":"Sikha S. Bagui, Dustin Mink, Subhash C. Bagui, Sakthivel Subramaniam","doi":"10.3390/computers12100204","DOIUrl":"https://doi.org/10.3390/computers12100204","url":null,"abstract":"Machine Learning is widely used in cybersecurity for detecting network intrusions. Though network attacks are increasing steadily, the percentage of such attacks to actual network traffic is significantly less. And here lies the problem in training Machine Learning models to enable them to detect and classify malicious attacks from routine traffic. The ratio of actual attacks to benign data is significantly high and as such forms highly imbalanced datasets. In this work, we address this issue using data resampling techniques. Though there are several oversampling and undersampling techniques available, how these oversampling and undersampling techniques are most effectively used is addressed in this paper. Two oversampling techniques, Borderline SMOTE and SVM-SMOTE, are used for oversampling minority data and random undersampling is used for undersampling majority data. Both the oversampling techniques use KNN after selecting a random minority sample point, hence the impact of varying KNN values on the performance of the oversampling technique is also analyzed. Random Forest is used for classification of the rare attacks. This work is done on a widely used cybersecurity dataset, UNSW-NB15, and the results show that 10% oversampling gives better results for both BMSOTE and SVM-SMOTE.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136209682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QoS-Aware and Energy Data Management in Industrial IoT
Pub Date: 2023-10-10 | DOI: 10.3390/computers12100203
Yarob Abdullah, Zeinab Movahedi
Two crucial challenges in Industry 4.0 are maintaining critical latency requirements for data access and ensuring efficient power consumption by field devices. Traditional centralized industrial networks, which provide only rudimentary data distribution capabilities, may not be able to meet such stringent requirements, and may later fail to meet them due to connection or node failures or severe performance degradation. To address this problem, this paper focuses on resource-constrained networks of Internet of Things (IoT) systems, exploiting the presence of several more powerful nodes that act as distributed local data storage proxies for every IoT set. To increase the battery lifetime of the network, nodes that are not involved in data transmission or data storage are turned off. We investigate the problem of maximizing network lifetime while respecting restrictions on data access latency. For this purpose, data are cached distributively in proxy nodes, reducing energy consumption and ultimately maximizing network lifetime. To address this problem, we introduce an energy-aware data management method (EDMM) that designates selected IoT nodes to store data distributively with the goal of extending network lifetime. Our proposed approach (1) ensures that data access latency stays below a specified threshold and (2) performs well with respect to network lifetime compared to an offline centralized heuristic algorithm.
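As an illustration of latency-constrained proxy placement (a simple greedy sketch, not the EDMM algorithm itself), the following code promotes high-energy nodes to proxies until every node can reach one within a latency threshold; the network, latencies, and energy values are hypothetical:

```python
def select_proxies(nodes, latency, residual_energy, threshold):
    """Greedy latency-constrained proxy selection: repeatedly promote the
    candidate with the most residual energy until every node can reach some
    proxy within the latency threshold.

    latency[a][b]: access latency between nodes a and b (ms).
    residual_energy[n]: remaining battery of node n.
    """
    proxies = []
    uncovered = set(nodes)
    while uncovered:
        best = max(uncovered, key=lambda n: residual_energy[n])
        proxies.append(best)
        uncovered = {n for n in uncovered if latency[n][best] > threshold}
    return proxies

# Hypothetical 4-node network with a 20 ms latency budget.
nodes = ["a", "b", "c", "d"]
lat = {"a": {"a": 0, "b": 10, "c": 40, "d": 35},
       "b": {"a": 10, "b": 0, "c": 30, "d": 25},
       "c": {"a": 40, "b": 30, "c": 0, "d": 15},
       "d": {"a": 35, "b": 25, "c": 15, "d": 0}}
energy = {"a": 80, "b": 95, "c": 60, "d": 90}
print(select_proxies(nodes, lat, energy, threshold=20))
```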
{"title":"QoS-Aware and Energy Data Management in Industrial IoT","authors":"Yarob Abdullah, Zeinab Movahedi","doi":"10.3390/computers12100203","DOIUrl":"https://doi.org/10.3390/computers12100203","url":null,"abstract":"Two crucial challenges in Industry 4.0 involve maintaining critical latency requirements for data access and ensuring efficient power consumption by field devices. Traditional centralized industrial networks that provide rudimentary data distribution capabilities may not be able to meet such stringent requirements. These requirements cannot be met later due to connection or node failures or extreme performance decadence. To address this problem, this paper focuses on resource-constrained networks of Internet of Things (IoT) systems, exploiting the presence of several more powerful nodes acting as distributed local data storage proxies for every IoT set. To increase the battery lifetime of the network, a number of nodes that are not included in data transmission or data storage are turned off. In this paper, we investigate the issue of maximizing network lifetime, and consider the restrictions on data access latency. For this purpose, data are cached distributively in proxy nodes, leading to a reduction in energy consumption and ultimately maximizing network lifetime. To address this problem, we introduce an energy-aware data management method (EDMM); with the goal of extending network lifetime, select IoT nodes are designated to save data distributively. Our proposed approach (1) makes sure that data access latency is underneath a specified threshold and (2) performs well with respect to network lifetime compared to an offline centralized heuristic algorithm.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136295071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Information Security Engineering Framework for Modeling Packet Filtering Firewall Using Neutrosophic Petri Nets
Pub Date: 2023-10-08 | DOI: 10.3390/computers12100202
Jamal Khudair Madhloom, Zainab Hammoodi Noori, Sif K. Ebis, Oday A. Hassen, Saad M. Darwish
Due to the Internet’s explosive growth, network security is now a major concern; as a result, tracking network traffic is essential for a variety of uses, including improving system efficiency, fixing bugs in the network, and keeping sensitive data secure. Firewalls are a crucial component of enterprise-wide security architectures because they protect individual networks from intrusion. The efficiency of a firewall can be negatively impacted by issues with its design, configuration, monitoring, and administration. Recent firewall security methods lack the rigor to manage the vagueness that comes with filtering packets from the exterior. Knowledge representation and reasoning are two areas where fuzzy Petri nets (FPNs) receive extensive usage as a modeling tool. Despite their widespread success, FPNs’ limitations in the security engineering field stem from the fact that it is difficult to represent different kinds of uncertainty. This article details the construction of a novel packet-filtering firewall model that addresses the limitations of current FPN-based filtering methods. The primary contribution is to employ Simplified Neutrosophic Petri nets (SNPNs) as a tool for modeling discrete event systems in the area of firewall packet filtering that are characterized by imprecise knowledge. Because of SNPNs’ symbolic ability, the packet filtration model can be quickly and easily established, examined, enhanced, and maintained. Based on the idea that the ambiguity of a packet’s movement can be described by if–then fuzzy production rules realized by the truth-membership, indeterminacy-membership, and falsity-membership functions, we adopt neutrosophic logic for modeling PN transition objects. In addition, we simulate the dynamic behavior of the tracking system in light of the ambiguity inherent in packet filtering by presenting a two-level filtering method to improve the ranking of the filtering rules list. Results from experiments on a local area network confirm the efficacy of the proposed method and illustrate how it can increase the firewall’s resilience to threats posed by network traffic.
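To give a flavor of neutrosophic rule evaluation, the sketch below scores packet-filtering rules with (truth, indeterminacy, falsity) triples and ranks the rules that fire; the score function and the example rules are illustrative assumptions and do not reproduce the paper's SNPN model:

```python
def neutrosophic_score(t, i, f):
    """Collapse a (truth, indeterminacy, falsity) triple into a single
    ranking value; (2 + t - i - f) / 3 is one simple choice, used here
    purely for illustration."""
    return (2.0 + t - i - f) / 3.0

def rank_rules(packet, rules):
    """Rank packet-filtering rules by how strongly they apply to a packet.
    Each rule is (name, predicate, (T, I, F)): the predicate is an ordinary
    boolean match on header fields, and the neutrosophic triple expresses
    how certain, ambiguous, or contradicted the rule's verdict is.
    """
    fired = [(name, neutrosophic_score(*tif))
             for name, predicate, tif in rules if predicate(packet)]
    return sorted(fired, key=lambda x: x[1], reverse=True)

# Hypothetical rule list for a TCP packet to port 23 (telnet).
packet = {"proto": "tcp", "dst_port": 23, "src": "10.0.0.7"}
rules = [
    ("block-telnet", lambda p: p["dst_port"] == 23, (0.9, 0.1, 0.05)),
    ("allow-lan",    lambda p: p["src"].startswith("10."), (0.6, 0.3, 0.2)),
]
print(rank_rules(packet, rules))
```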
{"title":"An Information Security Engineering Framework for Modeling Packet Filtering Firewall Using Neutrosophic Petri Nets","authors":"Jamal Khudair Madhloom, Zainab Hammoodi Noori, Sif K. Ebis, Oday A. Hassen, Saad M. Darwish","doi":"10.3390/computers12100202","DOIUrl":"https://doi.org/10.3390/computers12100202","url":null,"abstract":"Due to the Internet’s explosive growth, network security is now a major concern; as a result, tracking network traffic is essential for a variety of uses, including improving system efficiency, fixing bugs in the network, and keeping sensitive data secure. Firewalls are a crucial component of enterprise-wide security architectures because they protect individual networks from intrusion. The efficiency of a firewall can be negatively impacted by issues with its design, configuration, monitoring, and administration. Recent firewall security methods do not have the rigor to manage the vagueness that comes with filtering packets from the exterior. Knowledge representation and reasoning are two areas where fuzzy Petri nets (FPNs) receive extensive usage as a modeling tool. Despite their widespread success, FPNs’ limitations in the security engineering field stem from the fact that it is difficult to represent different kinds of uncertainty. This article details the construction of a novel packet-filtering firewall model that addresses the limitations of current FPN-based filtering methods. The primary contribution is to employ Simplified Neutrosophic Petri nets (SNPNs) as a tool for modeling discrete event systems in the area of firewall packet filtering that are characterized by imprecise knowledge. Because of SNPNs’ symbolic ability, the packet filtration model can be quickly and easily established, examined, enhanced, and maintained. Based on the idea that the ambiguity of a packet’s movement can be described by if–then fuzzy production rules realized by the truth-membership function, the indeterminacy-membership function, and the falsity-membership functional, we adopt the neutrosophic logic for modelling PN transition objects. In addition, we simulate the dynamic behavior of the tracking system in light of the ambiguity inherent in packet filtering by presenting a two-level filtering method to improve the ranking of the filtering rules list. Results from experiments on a local area network back up the efficacy of the proposed method and illustrate how it can increase the firewall’s susceptibility to threats posed by network traffic.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"212 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135198626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MalFe—Malware Feature Engineering Generation Platform
Pub Date: 2023-10-08 | DOI: 10.3390/computers12100201
Avinash Singh, Richard Adeyemi Ikuesan, Hein Venter
The growing sophistication of malware has resulted in diverse challenges, especially among security researchers who are expected to develop mechanisms to thwart these malicious attacks. While security researchers have turned to machine learning to combat this surge in malware attacks and enhance detection and prevention methods, they often encounter limitations when it comes to sourcing malware binaries. This limitation places the burden on malware researchers to create context-specific datasets and detection mechanisms, a time-consuming and intricate process that involves a series of experiments. The lack of accessible analysis reports and a centralized platform for sharing and verifying findings has resulted in many research outputs that can neither be replicated nor validated. To address this critical gap, a malware analysis data curation platform was developed. This platform offers malware researchers a highly customizable feature generation process drawing from analysis data reports, particularly those generated in sandbox-based environments such as Cuckoo Sandbox. To evaluate the effectiveness of the platform, a replication of existing studies was conducted in the form of case studies. These studies revealed that the developed platform offers an effective approach that can aid malware detection research. Moreover, a real-world scenario involving over 3000 ransomware and benign samples for ransomware detection based on PE entropy was explored. This yielded an impressive accuracy score of 98.8% and an AUC of 0.97 when employing the decision tree algorithm, with a low latency of 1.51 ms. These results emphasize the necessity of the proposed platform while demonstrating its capacity to construct a comprehensive detection mechanism. By fostering community-driven interactive databanks, this platform enables the creation of datasets as well as the sharing of reports, both of which can substantially reduce experimentation time and enhance research repeatability.
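A minimal sketch of the entropy-plus-decision-tree idea behind the ransomware case study is shown below; the entropy values and labels are made up, and the feature here is whole-file byte entropy rather than the PE-specific features generated through MalFe:

```python
import math
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def byte_entropy(data: bytes) -> float:
    """Shannon entropy (bits per byte, 0..8) of raw contents; packed or
    encrypted ransomware payloads tend to score near 8."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical feature table: one entropy value per sample; the real study
# derives entropy features from analysis reports via MalFe, and these
# numbers are made up for illustration.
X = [[7.9], [7.6], [7.8], [4.2], [5.1], [3.8]]   # entropy of each sample
y = [1, 1, 1, 0, 0, 0]                           # 1 = ransomware, 0 = benign
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

sample = bytes(range(256)) * 16                  # stand-in for a file's bytes
print(byte_entropy(sample), clf.predict([[byte_entropy(sample)]]))
```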
{"title":"MalFe—Malware Feature Engineering Generation Platform","authors":"Avinash Singh, Richard Adeyemi Ikuesan, Hein Venter","doi":"10.3390/computers12100201","DOIUrl":"https://doi.org/10.3390/computers12100201","url":null,"abstract":"The growing sophistication of malware has resulted in diverse challenges, especially among security researchers who are expected to develop mechanisms to thwart these malicious attacks. While security researchers have turned to machine learning to combat this surge in malware attacks and enhance detection and prevention methods, they often encounter limitations when it comes to sourcing malware binaries. This limitation places the burden on malware researchers to create context-specific datasets and detection mechanisms, a time-consuming and intricate process that involves a series of experiments. The lack of accessible analysis reports and a centralized platform for sharing and verifying findings has resulted in many research outputs that can neither be replicated nor validated. To address this critical gap, a malware analysis data curation platform was developed. This platform offers malware researchers a highly customizable feature generation process drawing from analysis data reports, particularly those generated in sandbox-based environments such as Cuckoo Sandbox. To evaluate the effectiveness of the platform, a replication of existing studies was conducted in the form of case studies. These studies revealed that the developed platform offers an effective approach that can aid malware detection research. Moreover, a real-world scenario involving over 3000 ransomware and benign samples for ransomware detection based on PE entropy was explored. This yielded an impressive accuracy score of 98.8% and an AUC of 0.97 when employing the decision tree algorithm, with a low latency of 1.51 ms. These results emphasize the necessity of the proposed platform while demonstrating its capacity to construct a comprehensive detection mechanism. By fostering community-driven interactive databanks, this platform enables the creation of datasets as well as the sharing of reports, both of which can substantially reduce experimentation time and enhance research repeatability.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135199981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}