Abstract The Internet of Things (IoT) pervades our daily lives (e.g., smart homes and smart cities). Despite its significant role in providing automatic real-time services to users, IoT devices are highly vulnerable because of their design simplicity and their limited power, CPU, and memory resources. Tracing network traffic and investigating its behavior helps in building a digital forensics framework to secure IoT networks. This paper proposes a new Network Digital Forensics approach called NDF IoT. The proposed approach uses the binary Owl optimizer to select the best subset of features for identifying suspicious behavior in such environments. The NDF IoT approach is evaluated on the UNSW Bot-IoT dataset in terms of detection rate, false alarms, accuracy, and F-score. It achieves a 100% detection rate and a 99.3% F-score, outperforming related works that used the same dataset while reducing the number of features to only three.
{"title":"A New Network Digital Forensics Approach for Internet of Things Environment Based on Binary Owl Optimizer","authors":"Hadeel Alazzam, Orieb Abualghanam, Qusay M. Al-zoubi, Abdulsalam Alsmady, Esraa Alhenawi","doi":"10.2478/cait-2022-0033","DOIUrl":"https://doi.org/10.2478/cait-2022-0033","url":null,"abstract":"Abstract The Internet of Things (IoT) is widespread in our lives these days (e.g., Smart homes, smart cities, etc.). Despite its significant role in providing automatic real-time services to users, these devices are highly vulnerable due to their design simplicity and limitations regarding power, CPU, and memory. Tracing network traffic and investigating its behavior helps in building a digital forensics framework to secure IoT networks. This paper proposes a new Network Digital Forensics approach called (NDF IoT). The proposed approach uses the Owl optimizer for selecting the best subset of features that help in identifying suspicious behavior in such environments. The NDF IoT approach is evaluated using the Bot IoT UNSW dataset in terms of detection rate, false alarms, accuracy, and f-score. The approach being proposed has achieved 100% detection rate and 99.3% f-score and outperforms related works that used the same dataset while reducing the number of features to three features only.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42331320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract A self-adaptable system is concerned with service adaptation whenever errors persist within the system. Changes in contextual information, such as networks or sensors, affect the system's effectiveness when the service adaptation process does not handle those contexts comprehensively. Moreover, correctly finding the most equivalent services for substitution has received limited attention in previous work. A dynamic service adaptation framework is introduced to monitor the system and run a reasoning control to solve these issues. Hence, this paper presents a case study to validate the dynamic service adaptation framework, which leverages a semantic-based approach in a context-aware environment. The evaluation of the case study showed a significant difference in effectiveness at a 95% confidence level, which confirms that the framework is promising for operating a dynamic adaptation process in a pervasive environment.
{"title":"Semantic-Based Dynamic Service Adaptation in Context-Aware Mobile Cloud Learning","authors":"S. Muhamad, N. Admodisastro, H. Osman, N. M. Ali","doi":"10.2478/cait-2022-0030","DOIUrl":"https://doi.org/10.2478/cait-2022-0030","url":null,"abstract":"Abstract Self-adaptable system concerns on service adaptation whenever errors persist within the system. Changes in contextual information such as networks or sensors will affect the system’s effectiveness because the service adaptation process is not comprehensively handled in those contexts. Besides, the correctness to get the most equivalence services to be substituted is limitedly being addressed from previous works. A dynamic service adaptation framework is introduced to monitor and run a reasoning control to solve these issues. Hence, this paper presents a case study to proof the dynamic service adaptation framework that leverages on semantic-based approach in a context-aware environment. The evaluation of the case study resulted in a significant difference for the effectiveness at a 95% confidence level, which can be interpreted to confirm that the framework is promising to be used in operating dynamic adaptation process in a pervasive environment.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42870664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Given the rise of face spoofing attacks, adequately protecting human identity through the face has become a significant challenge globally. Face spoofing is the act of presenting a recaptured frame to the verification device to gain illegal access on behalf of a legitimate person, with or without their consent. Several methods have been proposed to detect face spoofing attacks over the last decade. However, these methods consider only luminance information and therefore discriminate poorly between spoofed and genuine faces. This article proposes a practical approach combining Local Binary Patterns (LBP) and convolutional neural network-based transfer learning models to extract low-level and high-level features. This paper analyzes three color spaces (i.e., RGB, HSV, and YCrCb) to understand the impact of color distribution on real and spoofed faces in the NUAA benchmark dataset. In-depth analysis of the experimental results and comparison with existing approaches show the superiority and effectiveness of the proposed models.
{"title":"A Color-Texture-Based Deep Neural Network Technique to Detect Face Spoofing Attacks","authors":"Mayank Kumar Rusia, D. Singh","doi":"10.2478/cait-2022-0032","DOIUrl":"https://doi.org/10.2478/cait-2022-0032","url":null,"abstract":"Abstract Given the face spoofing attack, adequate protection of human identity through face has become a significant challenge globally. Face spoofing is an act of presenting a recaptured frame before the verification device to gain illegal access on behalf of a legitimate person with or without their concern. Several methods have been proposed to detect face spoofing attacks over the last decade. However, these methods only consider the luminance information, reflecting poor discrimination of spoofed face from the genuine face. This article proposes a practical approach combining Local Binary Patterns (LBP) and convolutional neural network-based transfer learning models to extract low-level and high-level features. This paper analyzes three color spaces (i.e., RGB, HSV, and YCrCb) to understand the impact of the color distribution on real and spoofed faces for the NUAA benchmark dataset. In-depth analysis of experimental results and comparison with other existing approaches show the superiority and effectiveness of our proposed models.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48811114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract With advances in technology, Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) are gaining popularity in many computer vision applications, such as object classification, object detection, and human detection. ML and DL approaches are highly compute-intensive and require advanced computational resources for implementation. Multicore CPUs and GPUs with a large number of dedicated processor cores are typically the most prevalent and effective solutions for these high computational needs. In this manuscript, we present an analysis of how such multicore hardware technologies respond to DL algorithms. A Convolutional Neural Network (CNN) model has been trained for three different classification problems using three different datasets. All experiments have been performed on three different computational resources: a Raspberry Pi, an Nvidia Jetson Nano board, and a desktop computer. Results are reported for each hardware configuration in terms of classification accuracy and hardware response.
{"title":"Hardware Response and Performance Analysis of Multicore Computing Systems for Deep Learning Algorithms","authors":"Lalit Kumar, D. Singh","doi":"10.2478/cait-2022-0028","DOIUrl":"https://doi.org/10.2478/cait-2022-0028","url":null,"abstract":"Abstract With the advancement in technological world, the technologies like Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) are gaining more popularity in many applications of computer vision like object classification, object detection, Human detection, etc., ML and DL approaches are highly compute-intensive and require advanced computational resources for implementation. Multicore CPUs and GPUs with a large number of dedicated processor cores are typically the more prevailing and effective solutions for the high computational need. In this manuscript, we have come up with an analysis of how these multicore hardware technologies respond to DL algorithms. A Convolutional Neural Network (CNN) model have been trained for three different classification problems using three different datasets. All these experimentations have been performed on three different computational resources, i.e., Raspberry Pi, Nvidia Jetson Nano Board, & desktop computer. Results are derived for performance analysis in terms of classification accuracy and hardware response for each hardware configuration.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43327112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Malware attacks cause great harm to contemporary information systems, which calls for analysis of how computer networks react under malware impact. The focus of the present study is the analysis of a computer network's states and reactions during malware attacks, defined by the susceptibility, exposure, infection, and recoverability of computer nodes. Two scenarios are considered: equilibrium without security software and non-equilibrium with security software in the computer network. The behavior of the computer network under a malware attack is described by a system of nonhomogeneous differential equations. The system is solved, and analytical expressions are derived for analyzing network characteristics with respect to the susceptibility, exposure, infection, and recoverability of computer nodes during a malware attack. The derived analytical expressions are illustrated with results of numerical experiments. The approach developed in this work can be applied to control, prevent, and protect computer networks from malware intrusions.
{"title":"Mathematical Modelling of Malware Intrusion in Computer Networks","authors":"Andon Lazarov","doi":"10.2478/cait-2022-0026","DOIUrl":"https://doi.org/10.2478/cait-2022-0026","url":null,"abstract":"Abstract Malware attacks cause great harms in the contemporary information systems and that requires analysis of computer networks reaction in case of malware impact. The focus of the present study is on the analysis of the computer network’s states and reactions in case of malware attacks defined by the susceptibility, exposition, infection and recoverability of computer nodes. Two scenarios are considered – equilibrium without secure software and not equilibrium with secure software in the computer network. The behavior of the computer network under a malware attack is described by a system of nonhomogeneous differential equations. The system of the nonhomogeneous differential equations is solved, and analytical expressions are derived to analyze network characteristics in case of susceptibility, exposition, infection and recoverability of computer nodes during malware attack. The analytical expressions derived are illustrated with results of numerical experiments. The conception developed in this work can be applied to control, prevent and protect computer networks from malware intrusions.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49571122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract HPC clouds may provide fast access to fully configurable and dynamically scalable virtualized HPC clusters to address complex and challenging computation- and storage-intensive requirements. The complex environmental, software, and hardware requirements and dependencies of such systems make it challenging to carry out large-scale simulations, prediction systems, and other data- and compute-intensive workloads over the cloud. This article presents an architecture that enables HPC workloads to run serverless over the cloud (Shoc), one of the most critical cloud capabilities for HPC workloads. On the one hand, Shoc utilizes the abstraction power of container technologies such as Singularity and Docker, combined with the scheduling and resource management capabilities of Kubernetes. On the other hand, Shoc allows running any CPU-intensive and data-intensive workload in the cloud without the need to manage HPC infrastructure or complex software and hardware environment deployments.
{"title":"Serverless High-Performance Computing over Cloud","authors":"Davit Petrosyan, H. Astsatryan","doi":"10.2478/cait-2022-0029","DOIUrl":"https://doi.org/10.2478/cait-2022-0029","url":null,"abstract":"Abstract HPC clouds may provide fast access to fully configurable and dynamically scalable virtualized HPC clusters to address the complex and challenging computation and storage-intensive requirements. The complex environmental, software, and hardware requirements and dependencies on such systems make it challenging to carry out our large-scale simulations, prediction systems, and other data and compute-intensive workloads over the cloud. The article aims to present an architecture that enables HPC workloads to be serverless over the cloud (Shoc), one of the most critical cloud capabilities for HPC workloads. On one hand, Shoc utilizes the abstraction power of container technologies like Singularity and Docker, combined with the scheduling and resource management capabilities of Kubernetes. On the other hand, Shoc allows running any CPU-intensive and data-intensive workloads in the cloud without needing to manage HPC infrastructure, complex software, and hardware environment deployments.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45007768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Convolutional Neural Networks (CNNs) have been widely utilized for Automatic Target Recognition (ATR) in Synthetic Aperture Radar (SAR) images. However, the large number of parameters and the huge training data requirement limit the use of CNNs in SAR ATR. While previous works have primarily focused on model compression and structural modification of CNNs, this paper employs the One-Vs-All (OVA) technique on CNNs to address these issues. The OVA-CNN comprises several binary classifying CNNs (BCNNs), each of which acts as an expert in correctly recognizing a single target. The BCNN that predicts the highest probability for a given target determines the class to which the target belongs. Evaluation of the model using various metrics on the Moving and Stationary Target Acquisition and Recognition (MSTAR) benchmark dataset illustrates that the OVA-CNN has fewer weight parameters and lower training sample requirements while exhibiting a high recognition rate.
{"title":"One-vs-All Convolutional Neural Networks for Synthetic Aperture Radar Target Recognition","authors":"B. P. Babu, S. Narayanan","doi":"10.2478/cait-2022-0035","DOIUrl":"https://doi.org/10.2478/cait-2022-0035","url":null,"abstract":"Abstract Convolutional Neural Networks (CNN) have been widely utilized for Automatic Target Recognition (ATR) in Synthetic Aperture Radar (SAR) images. However, a large number of parameters and a huge training data requirements limit CNN’s use in SAR ATR. While previous works have primarily focused on model compression and structural modification of CNN, this paper employs the One-Vs-All (OVA) technique on CNN to address these issues. OVA-CNN comprises several Binary classifying CNNs (BCNNs) that act as an expert in correctly recognizing a single target. The BCNN that predicts the highest probability for a given target determines the class to which the target belongs. The evaluation of the model using various metrics on the Moving and Stationary Target Acquisition and Recognition (MSTAR) benchmark dataset illustrates that the OVA-CNN has fewer weight parameters and training sample requirements while exhibiting a high recognition rate.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42579608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract As a basic notion in algebra, closure operations have been successfully applied to many fields of computer science. In this paper, we study dense families of closure operations. In particular, we prove that certain families are dense in any closure operation, and we identify the greatest and smallest dense families, including the collection of all closed sets and the minimal generator of the closed sets. More importantly, we provide a necessary and sufficient condition for an arbitrary family to be dense. We then use these dense families to characterize minimal keys of a closure operation from the viewpoint of transversal hypergraphs and construct an algorithm for determining the minimal keys of a closure operation.
{"title":"Investigation of Dense Family of Closure Operations","authors":"Nguyen Hoang Son, J. Demetrovics, V. D. Thi, Nguyen Ngoc Thuy","doi":"10.2478/cait-2022-0025","DOIUrl":"https://doi.org/10.2478/cait-2022-0025","url":null,"abstract":"Abstract As a basic notion in algebra, closure operations have been successfully applied to many fields of computer science. In this paper we study dense family in the closure operations. In particular, we prove some families to be dense in any closure operation, in which the greatest and smallest dense families, including the collection of the whole closed sets and the minimal generator of the closed sets, are also pointed out. More important, a necessary and sufficient condition for an arbitrary family to be dense is provided in our paper. Then we use these dense families to characterize minimal keys of the closure operation under the viewpoint of transversal hypergraphs and construct an algorithm for determining the minimal keys of a closure operation.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44378789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Open Radio Access Network (O-RAN) is a concept that aims at embedding intelligence at the network edge and disaggregating network functionality from the hardware. This paper studies how the O-RAN concept can be used to optimize radio resource management. The research focuses on adaptive radio resource allocation based on predictions of device activity. For narrowband devices that sporadically send small volumes of data, a feature is defined that enables a device with no activity for a short time to suspend its session and resume it upon moving back into the active state. Dynamic configuration of the inactivity timer based on predicted device activity may further optimize radio resource allocation. The paper studies an O-RAN use case for dynamic radio resource control and presents the results of emulating the RESTful interface defined between the O-RAN non-real-time and near-real-time functions.
{"title":"Toward Programmability of Radio Resource Control Based on O-RAN","authors":"E. Pencheva, I. Atanasov","doi":"10.2478/cait-2022-0034","DOIUrl":"https://doi.org/10.2478/cait-2022-0034","url":null,"abstract":"Abstract Open Radio Access Network (O-RAN) is a concept that aims at embedding intelligence at the network edge and at disaggregating of network functionality from the hardware. The paper studies how the O-RAN concept can be used for optimization of radio resource management. The research focuses on adaptive radio resource allocation based on predictions of device activity. For narrowband devices which send sporadically small volumes of data, a feature is defined which enables a device with no activity for a short time to suspend its session and to resume it moving in active state. Dynamic configuration of the inactivity timer based on prediction of device activity may further optimize radio resource allocation. The paper studies an O-RAN use case for dynamic radio resource control and presents the results of emulation of the RESTful interface defined between the O-RAN non-real-time and near real-time functions.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41311956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In many visual perception studies, external visual noise is used as a methodology to broaden the understanding of how visual stimuli are processed. The underlying assumption is that two sources of noise limit sensory processing: the external noise inherent in environmental signals and the internal noise, or internal variability, at different levels of the neural system. Usually, when external noise is added to an image, it is evenly distributed. However, this modifies the color intensity and image contrast, and it is unclear whether the visual system responds to their change or to the presence of the noise. We aimed to develop several methods of noise generation with different distributions that preserve the global image characteristics. These methods are appropriate in various applications for evaluating the internal noise in the visual system and its ability to filter the added noise. As these methods destroy the correlation in image intensity between neighboring pixels, they can also be used to evaluate the role of local spatial structure in image processing.
{"title":"Noise Generation Methods Preserving Image Color Intensity Distributions","authors":"Tsvetalin Totev, N. Bocheva, S. Stefanov, M. Mihaylova","doi":"10.2478/cait-2022-0031","DOIUrl":"https://doi.org/10.2478/cait-2022-0031","url":null,"abstract":"Abstract In many visual perception studies, external visual noise is used as a methodology to broaden the understanding of information processing of visual stimuli. The underlying assumption is that two sources of noise limit sensory processing: the external noise inherent in the environmental signals and the internal noise or internal variability at different levels of the neural system. Usually, when external noise is added to an image, it is evenly distributed. However, the color intensity and image contrast are modified in this way, and it is unclear whether the visual system responds to their change or the noise presence. We aimed to develop several methods of noise generation with different distributions that keep the global image characteristics. These methods are appropriate in various applications for evaluating the internal noise in the visual system and its ability to filter the added noise. As these methods destroy the correlation in image intensity of neighboring pixels, they could be used to evaluate the role of local spatial structure in image processing.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43005683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}