Recently, various attempts have been made to characterize information security threats, particularly in the industrial sector. Yet a number of poorly understood threats remain that could jeopardize the safety of food processing industry data, information, and resources. This research paper aims to increase the efficiency of information security risk analysis in food processing industrial information systems. The participants in this study were experts from executive management, regular staff, technical and asset operators, third-party consultancy companies, and risk management professionals from the food processing sector in Sub-Saharan Africa. Risks were identified through a questionnaire and interviews covering a variety of questions and combining qualitative and quantitative risk analysis approaches, and the fuzzy inference system method was then applied to analyze the risk factors. The findings revealed that, among the information security concerns examined, electronic data under a data theft threat has a high risk level of 75.67%, while human resource management (HRM) under a social engineering threat has a low risk level of 26.67%. Risk factors with a high probability therefore call for rapid corrective action. Finally, the root causes of such threats should be identified and controlled before they cause detrimental effects. It is also important to note that primary interests and worldwide policies must be taken into consideration when examining information security in food processing industrial information systems.
{"title":"Анализ рисков информационной безопасности в пищевой промышленности с использованием системы нечеткого вывода","authors":"Amanuel Asfha, Abhishek Vaish","doi":"10.15622/ia.22.5.5","DOIUrl":"https://doi.org/10.15622/ia.22.5.5","url":null,"abstract":"Recently, different attempts have been made to characterize information security threats, particularly in the industrial sector. Yet, there have been a number of mysterious threats that could jeopardize the safety of food processing industry data, information, and resources. This research paper aims to increase the efficiency of information security risk analysis in food processing industrial information systems, and the participants in this study were experts in executive management, regular staff, technical and asset operators, third-party consultancy companies, and risk management professionals from the food processing sector in Sub-Saharan Africa. A questionnaire and interview with a variety of questions using qualitative and quantitative risk analysis approaches were used to gather the risk identifications, and the fuzzy inference system method was also applied to analyze the risk factor in this paper. The findings revealed that among information security concerns, electronic data in a data theft threat has a high-risk outcome of 75.67%, and human resource management (HRM) in a social engineering threat has a low-risk impact of 26.67%. Thus, the high-probability risk factors need quick action, and the risk components with a high probability call for rapid corrective action. Finally, the root causes of such threats should be identified and controlled before experiencing detrimental effects. It's also important to note that primary interests and worldwide policies must be taken into consideration while examining information security in food processing industrial information systems.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135864215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Difficulties in the algorithmic simulation of natural thinking point to the inadequacy of the information encodings used for this purpose. A promising approach to this problem represents information by the qubit states of quantum theory, which are structurally aligned with major theories of cognitive semantics. The paper develops this idea by linking qubit states with color as a fundamental carrier of affective meaning. The approach builds on the geometric affinity between the Hilbert space of qubit states and color solids, which is used to establish a precise one-to-one mapping between them. This is enabled by an original decomposition of the qubit state over three non-orthogonal basis vectors corresponding to the red, green, and blue colors. The real-valued coefficients of this decomposition are identical to the tomograms of the qubit state in the corresponding directions, which are related to the ordinary Stokes parameters by a rotational transform. Classical compositions of black, white, and the six main colors (red, green, blue, yellow, magenta, and cyan) are then mapped to analogous superpositions of the qubit states. Pure and mixed colors intuitively map to pure and mixed qubit states on the surface and in the volume of the Bloch ball, while the grayscale maps to the diameter of the Bloch sphere. Here, the lightness of a color corresponds to the probability of the qubit's basis state «1», while saturation and hue encode the coherence and phase of the qubit, respectively. The developed code identifies color as a bridge between the quantum-theoretic formalism and the qualitative regularities of the natural mind. This opens prospects for deeper integration of quantum informatics in the semantic analysis of data, image processing, and the development of nature-like computational architectures.
{"title":"Цветовая кодировка кубитных состояний","authors":"Ilya Surov","doi":"10.15622/ia.22.5.9","DOIUrl":"https://doi.org/10.15622/ia.22.5.9","url":null,"abstract":"Difficulties in algorithmic simulation of natural thinking point to the inadequacy of information encodings used to this end. The promising approach to this problem represents information by the qubit states of quantum theory, structurally aligned with major theories of cognitive semantics. The paper develops this idea by linking qubit states with color as fundamental carrier of affective meaning. The approach builds on geometric affinity of Hilbert space of qubit states and color solids, used to establish precise one-to-one mapping between them. This is enabled by original decomposition of qubit in three non-orthogonal basis vectors corresponding to red, green, and blue colors. Real-valued coefficients of such decomposition are identical to the tomograms of the qubit state in the corresponding directions, related to ordinary Stokes parameters by rotational transform. Classical compositions of black, white and six main colors (red, green, blue, yellow, magenta and cyan) are then mapped to analogous superposition of the qubit states. Pure and mixed colors intuitively map to pure and mixed qubit states on the surface and in the volume of the Bloch ball, while grayscale is mapped to the diameter of the Bloch sphere. Herewith, the lightness of color corresponds to the probability of the qubit’s basis state «1», while saturation and hue encode coherence and phase of the qubit, respectively. The developed code identifies color as a bridge between quantum-theoretic formalism and qualitative regularities of the natural mind. This opens prospects for deeper integration of quantum informatics in semantic analysis of data, image processing, and the development of nature-like computational architectures.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135864222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Makar Pelogeiko, Stanislav Sartasov, Oleg Granichin
Extending smartphone working time is an ongoing endeavour that becomes more important with each passing year. It can be achieved with more advanced hardware or by introducing energy-aware practices into software, and the latter is the more accessible approach. As the CPU is one of the most power-hungry smartphone components, Dynamic Voltage and Frequency Scaling (DVFS) is used to adjust CPU frequency to the current computational needs, and various algorithms, both energy-aware and energy-agnostic, have already been developed. Following our previous work on the subject, we propose a novel DVFS approach that uses simultaneous perturbation stochastic approximation (SPSA) with two noisy observations to track the optimal frequency, and we implement several algorithms based on it. We also address the issue of the hardware lag between the signal for the CPU to change frequency and the actual update. As Android OS can use either the default task scheduler or an energy-aware one (EAS), which is capable of taking advantage of heterogeneous mobile CPU architectures such as ARM big.LITTLE, we also explore an integration scheme between the proposed algorithms and the OS schedulers. A model-based testing methodology for comparing the developed algorithms against existing ones is presented, and a test suite reflecting real-world use case scenarios is outlined. Our experiments show that the SPSA-based algorithm works well with EAS under a simplified integration scheme, delivering CPU performance comparable to other energy-aware DVFS algorithms with decreased energy consumption.
{"title":"On Stochastic Optimization for Smartphone CPU Energy Consumption Decrease","authors":"Makar Pelogeiko, Stanislav Sartasov, Oleg Granichin","doi":"10.15622/ia.22.5.3","DOIUrl":"https://doi.org/10.15622/ia.22.5.3","url":null,"abstract":"Extending smartphone working time is an ongoing endeavour becoming more and more important with each passing year. It could be achieved by more advanced hardware or by introducing energy-aware practices to software, and the latter is a more accessible approach. As the CPU is one of the most power-hungry smartphone devices, Dynamic Voltage Frequency Scaling (DVFS) is a technique to adjust CPU frequency to the current computational needs, and different algorithms were already developed, both energy-aware and energy-agnostic kinds. Following our previous work on the subject, we propose a novel DVFS approach to use simultaneous perturbation stochastic approximation (SPSA) with two noisy observations for tracking the optimal frequency and implementing several algorithms based on it. Moreover, we also address an issue of hardware lag between a signal for the CPU to change frequency and its actual update. As Android OS could use a default task scheduler or an energy-aware one, which is capable of taking advantage of heterogeneous mobile CPU architectures such as ARM big.LITTLE, we also explore an integration scheme between the proposed algorithms and OS schedulers. A model-based testing methodology to compare the developed algorithms against existing ones is presented, and a test suite reflecting real-world use case scenarios is outlined. Our experiments show that the SPSA-based algorithm works well with EAS with a simplified integration scheme, showing CPU performance comparable to other energy-aware DVFS algorithms and a decreased energy consumption.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135865018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evgenia Novikova, Elena Fedorchenko, Igor Kotenko, Ivan Kholod
To provide an accurate and timely response to different types of attacks, intrusion detection systems collect and analyze a large amount of data, which may include information with limited access, such as personal data or trade secrets. Consequently, such systems can be seen as an additional source of risks associated with handling sensitive information and breaching its security. Applying the federated learning paradigm to build analytical models for attack and anomaly detection can significantly reduce such risks, because locally generated data is not transmitted to any third party and model training is performed locally, on the data sources. Using federated training for intrusion detection solves the problem of training on data that belongs to different organizations and that, due to the need to protect commercial or other secrets, cannot be placed in the public domain. This approach also makes it possible to expand and diversify the set of data on which machine learning models are trained, thereby improving the detectability of heterogeneous attacks. Because this approach can overcome the aforementioned problems, it is actively used to design new methods for intrusion and anomaly detection. The authors systematically explore existing solutions for intrusion and anomaly detection based on federated learning, study their advantages, and formulate open challenges associated with their application in practice. Particular attention is paid to the architecture of the proposed systems and the intrusion detection methods and models used; approaches for modeling interactions between multiple system users and distributing data among them are also discussed. The authors conclude by formulating open problems that need to be solved in order to apply federated learning-based intrusion detection systems in practice.
{"title":"Аналитический обзор подходов к обнаружению вторжений, основанных на федеративном обучении: преимущества использования и открытые задачи","authors":"Evgenia Novikova, Elena Fedorchenko, Igor Kotenko, Ivan Kholod","doi":"10.15622/ia.22.5.4","DOIUrl":"https://doi.org/10.15622/ia.22.5.4","url":null,"abstract":"To provide an accurate and timely response to different types of attacks, intrusion detection systems collect and analyze a large amount of data, which may include information with limited access, such as personal data or trade secrets. Consequently, such systems can be seen as an additional source of risks associated with handling sensitive information and breaching its security. Applying the federated learning paradigm to build analytical models for attack and anomaly detection can significantly reduce such risks because locally generated data is not transmitted to any third party, and model training is done locally - on the data sources. Using federated training for intrusion detection solves the problem of training on data that belongs to different organizations, and which, due to the need to protect commercial or other secrets, cannot be placed in the public domain. Thus, this approach also allows us to expand and diversify the set of data on which machine learning models are trained, thereby increasing the level of detectability of heterogeneous attacks. Due to the fact that this approach can overcome the aforementioned problems, it is actively used to design new approaches for intrusion and anomaly detection. The authors systematically explore existing solutions for intrusion and anomaly detection based on federated learning, study their advantages, and formulate open challenges associated with its application in practice. Particular attention is paid to the architecture of the proposed systems, the intrusion detection methods and models used, and approaches for modeling interactions between multiple system users and distributing data among them are discussed. The authors conclude by formulating open problems that need to be solved in order to apply federated learning-based intrusion detection systems in practice.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135864065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the realm of modern image processing, the emphasis often lies on engineering-based approaches rather than scientific solutions to diverse practical problems. One prevalent task within this domain is the skeletonization of binary images. Skeletonization is a powerful process for extracting the skeleton of objects located in digital binary images. It is widely employed for automating tasks in numerous fields such as pattern recognition, robot vision, animation, and image analysis. Existing skeletonization techniques are mainly based on three approaches for identifying an approximate skeleton: boundary erosion, distance coding, and the Voronoi diagram. In this work, we present an empirical evaluation of a set of well-known techniques and report our findings. We specifically deal with computing skeletons of 2D binary images by selecting different approaches and evaluating their effectiveness. Visual evaluation is the primary method used to showcase the performance of the selected skeletonization algorithms. Because there is no definitive definition of the "true" skeleton of a digital object, accurately assessing the effectiveness of skeletonization algorithms poses a significant research challenge. Although researchers have attempted quantitative assessments, such measures are typically customized for specific domains and may not be suitable for the present work. The experimental results presented in this work illustrate the performance of the three main skeletonization approaches from different perspectives.
{"title":"Оценка методов скелетизации двумерных бинарных изображений","authors":"Shadi Abudalfa","doi":"10.15622/ia.22.5.7","DOIUrl":"https://doi.org/10.15622/ia.22.5.7","url":null,"abstract":"In the realm of modern image processing, the emphasis often lies on engineering-based approaches rather than scientific solutions to address diverse practical problems. One prevalent task within this domain involves the skeletonization of binary images. Skeletonization is a powerful process for extracting the skeleton of objects located in digital binary images. This process is widely employed for automating many tasks in numerous fields such as pattern recognition, robot vision, animation, and image analysis. The existing skeletonization techniques are mainly based on three approaches: boundary erosion, distance coding, and Voronoi diagram for identifying an approximate skeleton. In this work, we present an empirical evaluation of a set of well-known techniques and report our findings. We specifically deal with computing skeletons in 2d binary images by selecting different approaches and evaluating their effectiveness. Visual evaluation is the primary method used to showcase the performance of selected skeletonization algorithms. Due to the absence of a definitive definition for the \"true\" skeleton of a digital object, accurately assessing the effectiveness of skeletonization algorithms poses a significant research challenge. Although researchers have attempted quantitative assessments, these measures are typically customized for specific domains and may not be suitable for our current work. The experimental results shown in this work illustrate the performance of the three main approaches in applying skeletonization with respect to different perspectives.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135864358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One of the main tools for recording auroras is optical observation of the sky in automatic mode using all-sky cameras. The results of these observations are recorded in special mnemonic tables called ascaplots. Ascaplots provide daily information on the presence or absence of cloud cover and auroras in various parts of the sky and are traditionally used to study the daily distribution of auroras in a given spatial region, as well as to calculate the probability of their observation in other regions in accordance with the level of geomagnetic activity. At present, however, ascaplots are processed manually, which entails significant time costs and a high proportion of errors due to the human factor. To increase the efficiency of ascaplot processing, we propose an approach that automates the recognition and digitization of data from optical observations of auroras. A formalization of the ascaplot structure is proposed and used to process the ascaplot image, extract the corresponding observation results, and form the resulting data set. The approach relies on machine vision algorithms and on a specialized mask (a debug image for digitization): a color image in which the general positions of the ascaplot cells are specified. The proposed approach and the corresponding algorithms are implemented as software that provides recognition and digitization of archival data from optical observations of auroras. The solution is a single-user desktop application that allows the user to convert ascaplot images into tables in batch mode, available for further processing and analysis. The results of computational experiments have shown that the proposed software makes it possible to avoid errors in the digitization of ascaplots, on the one hand, and to significantly increase the speed of the corresponding computational operations, on the other. Taken together, this will improve the efficiency of processing ascaplots and of conducting research in the relevant area.
{"title":"Программное обеспечение для автоматизированного распознавания и оцифровки архивных данных оптических наблюдений полярных сияний","authors":"Andrei Vorobev, Alexander Lapin, Gulnara Vorobeva","doi":"10.15622/ia.22.5.8","DOIUrl":"https://doi.org/10.15622/ia.22.5.8","url":null,"abstract":"One of the main tools for recording auroras is the optical observation of the sky in automatic mode using all-sky cameras. The results of observations are recorded in special mnemonic tables, ascaplots. Ascaplots provide daily information on the presence or absence of cloud cover and auroras in various parts of the sky and are traditionally used to study the daily distribution of auroras in a given spatial region, as well as to calculate the probability of their observation in other regions in accordance with the level of geomagnetic activity. At the same time, the processing of ascaplots is currently carried out manually, which is associated with significant time costs and a high proportion of errors due to the human factor. To increase the efficiency of ascaplot processing, we propose an approach that automates the recognition and digitization of data from optical observations of auroras. A formalization of the ascaplot structure is proposed, which is used to process the ascaplot image, extract the corresponding observation results, and form the resulting data set. The approach involves the use of machine vision algorithms and the use of a specialized mask - a debug image for digitization, which is a color image in which the general position of the ascaplot cells is specified. The proposed approach and the corresponding algorithms are implemented in the form of software that provides recognition and digitization of archival data from optical observations of auroras. The solution is a single-user desktop software that allows the user to convert ascaplot images into tables in batch mode, available for further processing and analysis. The results of the computational experiments have shown that the use of the proposed software will make it possible to avoid errors in the digitization of ascaplots, on the one hand, and significantly increase the speed of the corresponding computational operations, on the other. Taken together, this will improve the efficiency of processing ascaplots and conducting research in the relevant area.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135864755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The article is devoted to the original mathematical models of combat operations developed in Russia at the beginning of the 20th century. One of the first works outlining approaches to the mathematical modeling of military operations is Y. Karpov's article «Tactics of fortress artillery», published in 1906. It considered the task of defending a fortress against attacking enemy infantry chains. Based on the idea of the attackers overcoming the line of defense, mathematical relations were obtained linking the parameters of a shrapnel charge shot with the movements of an infantryman. The task of using a machine gun for the defense of the fortress was considered in a similar way. After analyzing the obtained relations, Y. Karpov came to the conclusion that all means of defense of the fortress can be compared through the length of the sector defended by each of them. P. Nikitin developed Y. Karpov's ideas and considered a wide range of means of destruction. Based on the results of this research, the author made recommendations on the distribution of forces and means in the defense of fortresses. In 1915, a year before the publication of the well-known Lanchester theory, M. Osipov published vivid and original models of two-sided combat operations. Summing the numbers of the fighting sides over infinitesimal intervals of time and then passing to the limit, he obtained linear and quadratic laws describing how the ratio of the numbers of the fighting sides influences their losses, and he also explored heterogeneous means of destruction. All of this was verified against the practice of various battles. M. Osipov showed that the coefficients in the laws of losses depend on the training of personnel, the terrain, the presence of fortifications, the moral and psychological state of the troops, etc. Based on the results of mathematical modeling, M. Osipov for the first time substantiated a number of propositions of the art of war. He showed that, in general, neither the linear nor the quadratic law of losses corresponds to the practice of the battles analyzed. For ease of use at that level of computing technology and to obtain a more reliable result, M. Osipov proposed using the exponent of "three halves" in the laws of losses, although he himself understood its approximate nature. Much attention is paid to the problem of authorship, the search for a prototype of the creator of the first two-sided model of combat operations, and the application of the theory to solving modern applied problems.
{"title":"From the History of Mathematical Modeling Military Operations in Russia (1900-1917)","authors":"Rafael Yusupov, Vladimir Ivanov","doi":"10.15622/ia.22.5.1","DOIUrl":"https://doi.org/10.15622/ia.22.5.1","url":null,"abstract":"The article is devoted to the original mathematical models of combat operations developed in Russia at the beginning of the XX century. One of the first works in which approaches to mathematical modeling of military operations were outlined can be considered an article by Y. Karpov «Tactics of fortress artillery», published in 1906. It considered the task of defending the fortress from attacking enemy infantry chains. Based on the idea of the attackers overcoming the line of defense, mathematical relations were obtained linking the parameters of the shot of the shrapnel charge with the movements of the infantryman. Similarly, the task of using a machine gun for the defense of the fortress was considered. After analyzing the obtained ratios, Y. Karpov came to the conclusion that all means of defense of the fortress can be correlated through the length of the area defended by this means. P. Nikitin developed Y. Karpov's ideas. He considered a wide range of means of destruction. Based on the results of the research, the author made recommendations on the distribution of forces and means in the defense of fortresses. M. Osipov in 1915 published vivid and original models of two-way combat operations, a year earlier than the well-known Lanchester theory. Summing up the numbers of the fighting sides at infinitesimal intervals of time, and then moving to the limits, he obtains linear and quadratic laws of the influence of the ratio of the number of the fighting sides on their losses, and explores heterogeneous means of destruction. All this is verified by the practice of various battles. M. Osipov showed that the coefficients in the laws of losses depend on the training of personnel, terrain, the presence of fortifications, the moral and psychological state of the troops, etc. Based on the results of mathematical modeling, M. Osipov for the first time substantiated a number of provisions of the art of war. He showed that neither linear nor quadratic laws of losses in general do not correspond to the practice of the battles conducted. For ease of use at that level of computer technology development and to obtain a more reliable result, M. Osipov proposed using the degree of \"three second\" in the laws of losses, although he himself understood its approximate nature. Much attention is paid to the problem of authorship, the search for a prototype of the creator of the first two-sided model of combat operations, and the application of theory to solve modern applied problems.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135864877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
2D and 3D digital multimedia files offer numerous benefits such as excellent quality, compression, editing, and reliable copying. On the other hand, these same qualities raise concerns, including the fear of data being accessed during communication. Steganography plays an important role in securing data in communication. Changing the type of cover file from digital multimedia files to protocols improves the security of the communication system. Protocols are an integral part of the communication system, and they can also be used to hide secret data, resulting in a low chance of detection. This paper is intended to help improve existing network steganography techniques, by way of a review of previous related work, toward enhancing bandwidth and decreasing detection rates. Papers on network steganography techniques from the last 21 years have been studied, analyzed, and summarized. This review can help researchers understand the existing trends in network steganography in order to pursue further work on algorithm improvement. The paper is organized according to the layers of the OSI model.
{"title":"A Walk-through towards Network Steganography Techniques","authors":"Urmila Pilania, Manoj Kumar, Tanwar Rohit, Neha Nandal","doi":"10.15622/ia.22.5.6","DOIUrl":"https://doi.org/10.15622/ia.22.5.6","url":null,"abstract":"2D and 3D digital multimedia files offer numerous benefits like excellent quality, compression, editing, reliable copying, etc. These qualities of the multimedia files, on the other hand, are the cause of fear including the fear of getting access to data during communication. Steganography plays an important role in providing security to the data in communication. Changing the type of cover file from digital multimedia files to protocols improve the security of the communication system. Protocols are an integral part of the communication system and these protocols can also be used to hide secret data resulting in low chances of detection. This paper is intended to help improve existing network steganography techniques by enhancing bandwidth and decreasing detection rates through reviewing previous related work. Recent papers of the last 21 years on network steganography techniques have been studied, analyzed, and summarized. This review can help researchers to understand the existing trends in network steganography techniques to pursue further work in this area for algorithms’ improvement. The paper is divided according to the layers of the OSI model.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135864883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The transportation system is one of the most important parts of a country's economy. At the same time, the growth in road traffic has a significant negative impact on the economic performance of the industry. One of the ways to increase the efficiency of using the transportation infrastructure is to manage traffic flows, including by controlling traffic signals at signalized intersections. Among the trends in the development of intelligent transportation systems are the creation of vehicular ad hoc networks, which allow the exchange of information between vehicles and infrastructure, and the development of autonomous vehicles. As a result, it becomes possible to formulate the problem of cooperative control of vehicle trajectories and traffic signals so as to increase the capacity of intersections and reduce fuel consumption and travel time. This paper presents a method for managing traffic flow at an intersection that consists of the cooperative control of traffic signals and of the trajectories of connected/autonomous vehicles. The developed method combines an algorithm for the adaptive control of traffic signals, based on a deterministic model for predicting the movement of vehicles, with a two-stage algorithm for constructing vehicle trajectories. The objective function used to construct the optimal trajectories takes into account fuel consumption, travel time on the road lane, and waiting time at the intersection. Experimental studies of the developed method were carried out in the microscopic traffic simulation package SUMO using three simulation scenarios: two synthetic scenarios and a scenario in a real urban environment. The results confirm the effectiveness of the developed method in terms of fuel consumption, travel time, and waiting time in comparison with an adaptive traffic signal control algorithm.
{"title":"Cooperative Control of Traffic Signals and Vehicle Trajectories","authors":"Anton Agafonov, Alexander Yumaganov","doi":"10.15622/ia.22.1.1","DOIUrl":"https://doi.org/10.15622/ia.22.1.1","url":null,"abstract":"The transportation system is one of the most important parts of the country's economy. At the same time, the growth in road traffic has a significant negative impact on the economic performance of the industry. One of the ways to increase the efficiency of using the transportation infrastructure is to manage traffic flows, incl. by controlling traffic signals at signalized intersections. One of the trends in the development of intelligent transportation systems is the creation of vehicular ad hoc networks that allow the exchange of information between vehicles and infrastructure, as well as the development of autonomous vehicles. As a result, it becomes possible to formulate the problem of cooperative control of vehicle trajectories and traffic signals to increase the capacity of intersections and reduce fuel consumption and travel time. This paper presents a method for managing traffic flow at an intersection, which consists of the cooperative control of traffic signals and trajectories of connected/autonomous vehicles. The developed method combines an algorithm for the adaptive control of traffic signals based on a deterministic model for predicting the movement of vehicles and a two-stage algorithm for constructing the trajectory of vehicles. The objective optimization function used to construct the optimal trajectories takes into account fuel consumption, travel time on the road lane, and waiting time at the intersection. Experimental studies of the developed method were carried out in the microscopic traffic simulation package SUMO using three simulation scenarios, including two synthetic scenarios and a scenario in a real urban environment. The results of experimental studies confirm the effectiveness of the developed method in terms of fuel consumption, travel time, and waiting time in comparison with the adaptive traffic signal control algorithm.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135839519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}