Emergency braking in the European Train Control System (ETCS) for high-speed trains relies on stepwise regulation of deceleration, depending on the braking capability of the train, terrain data, and changing weather along the route. These processes are defined in the ETCS specifications. The driver performs the stepwise deceleration adjustment repeatedly during braking until the train stops completely. The start and end of emergency braking, as well as the braking process itself, are accompanied by repeated pulsed operation of the brakes. This causes jumps in deceleration and, consequently, increased wear of the brake system and reduced passenger comfort, which in turn limits the maximum allowable speed. The article proposes a new concept and technique for constructing mathematical models of emergency braking curves that differ from the ETCS curves and are based on harmonic half-waves. It is shown that the ETCS deceleration curves are described by known second-order power half-waves, and their joint study shows that applying these curves necessarily entails a pulsed mode of brake operation. Two new variants of emergency braking curve models described by harmonic half-waves are proposed. The first variant has a single pulsed brake application at the end of the braking interval. The second variant is free of braking impulses and allows continuous regulation. These models explain the noted features of ETCS, contain proposals for eliminating them, and are applicable to the development of new emergency braking curves that allow smooth control of emergency braking of trains. The efficiency, differences, and advantages over the ETCS braking curves are demonstrated by mathematical modeling of emergency braking processes.
{"title":"Применение гармонических полуволн для автоматизации управления высокоскоростными поездами","authors":"Boris Mayorov","doi":"10.15622/ia.22.6.5","DOIUrl":"https://doi.org/10.15622/ia.22.6.5","url":null,"abstract":"The emergency braking processes in the European Train Control System (ETCS) of high-speed trains are associated with stepwise regulation of acceleration (deceleration) depending on the braking ability of the train, terrain data and changing weather on the route. These processes are defined in ETCS. The procedure for stepwise regulation of deceleration is carried out by the driver repeatedly in the process of braking until the train stops completely. The beginning of emergency braking and its end, as well as the braking process itself, is accompanied by repeated pulsed operation of the brakes, which leads to jumps in deceleration and, accordingly, to increased wear of the brake system, a decrease in comfort for passengers, which results in the limitation of the maximum allowable speed. The article proposes a new concept and technique for constructing mathematical models of emergency braking curves different from ETCS curves and based on harmonic half-waves. It is shown that the ETCS deceleration curves are described by known second-order power half-waves. Their joint study gives grounds to assert that the application of these curves leads to the obligatory pulsed mode of brake operation. Two new variants of models of emergency braking curves described by harmonic half-waves are proposed. The first option has one pulsed brake application at the end of the braking interval. The second option is free from braking impulses and allows the use of continuous regulation. These models explain the features of ETCS, contain proposals for their elimination, and are applicable to the development of new emergency braking curves that allow smooth control of emergency braking of trains. Efficiency, differences and advantages over ETCS braking curves are shown on the results of mathematical modeling of emergency braking processes.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":" 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135191860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic trajectory planning is a pressing scientific and technical problem whose solutions are in demand in many fields: unmanned transportation, robotic logistics, social robotics, etc. Often, when planning a trajectory, one must take into account that the agent (a robot, an unmanned car, etc.) cannot change its orientation arbitrarily while moving; in other words, kinematic constraints must be considered during planning. One widespread approach builds the trajectory from prepared parts, motion primitives, each of which satisfies the kinematic constraints. Methods implementing this approach usually emphasize reducing the number of combinations explored during planning (heuristic search), while the set of available primitives is regarded as externally defined. In this paper, on the contrary, we investigate and analyze the effect of different sets of available motion primitives on the quality of the planning solution under a fixed search algorithm. Specifically, we consider three different sets of motion primitives for a wheeled robot with a differential drive. As the search algorithm, we use the A* algorithm, well known in artificial intelligence and robotics. Solution quality is evaluated by six metrics, including planning time and the length and curvature of the resulting trajectory. Based on the study, conclusions are drawn about the factors that most strongly influence the planning result, and recommendations are given for constructing motion primitives that achieve a balance between the speed of the planning algorithm and the quality of the trajectories found.
{"title":"Примитивы движения робота в задаче планирования траектории с кинематическими ограничениями","authors":"Vladislav Golovin, Konstantin Yakovlev","doi":"10.15622/ia.22.6.4","DOIUrl":"https://doi.org/10.15622/ia.22.6.4","url":null,"abstract":"Automatic trajectory planning is an urgent scientific and technical problem, whose solutions are in demand in many fields: unmanned transportation, robotic logistics, social robotics, etc. Often, when planning a trajectory, it is necessary to consider the fact that the agent (robot, unmanned car, etc.) cannot arbitrarily change its orientation while moving, in other words, it is necessary to consider kinematic constraints when planning. One widespread approach to solving this problem is the approach that relies on the construction of a trajectory from prepared parts, motion primitives, each of which satisfies kinematic constraints. Often, the emphasis in the development of methods implementing this approach is on reducing the combinations of choices in planning (heuristic search), with the set of available primitives itself being regarded as externally defined. In this paper, on the contrary, we aim to investigate and analyze the effect of different available motion primitives on the quality of solving the planning problem with a fixed search algorithm. Specifically, we consider 3 different sets of motion primitives for a wheeled robot with differential drive. As a search algorithm, the A* algorithm well known in artificial intelligence and robotics is used. The solution quality is evaluated by 6 metrics, including planning time, length and curvature of the resulting trajectory. Based on the study, conclusions are made about the factors that have the strongest influence on the planning result, and recommendations are given on the construction of motion primitives, the use of which allows to achieve a balance between the speed of the planning algorithm and the quality of the trajectories found.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":" 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135191874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A system of nonlinear discrete (finite-difference) equations of general form with a bounded delay is considered. Interest in the qualitative analysis of such systems has grown significantly in recent years. At the same time, the literature, both domestic and foreign, mainly analyzes the stability of the zero equilibrium position with respect to all variables, a problem of great generality. The main research method is a discrete-functional analogue of the direct Lyapunov method. In this article, it is assumed that the system under consideration admits a "partial" (in some part of the state variables) zero equilibrium position. The problem of stability of this equilibrium position is posed, with stability considered not with respect to all variables but only to the part of the variables that define the equilibrium position. Such a problem belongs to the class of partial stability problems, which are actively studied for systems of various forms of mathematical description. The proposed statement of the problem extends these studies to the system under consideration. To solve the problem, a discrete version of the Lyapunov–Krasovskii functionals method is used in the space of discrete functions, with appropriate refinement of the requirements on the functionals. To expand the capabilities of this method, two types of additional auxiliary (generally vector-valued) discrete functions are used in order to: 1) adjust the region of the phase space in which the Lyapunov–Krasovskii functional is constructed; 2) obtain the necessary estimates of the functionals and of their differences (increments) along the trajectories of the system, on the basis of which conclusions about partial stability are drawn. The benefit of this approach is that, as a result, the Lyapunov–Krasovskii functional and its difference along the system trajectories may be sign-alternating in the domain usually considered in partial stability analysis. Sufficient conditions for partial stability, partial uniform stability, and partial uniform asymptotic stability of the specified type are obtained. The features of the proposed approach are illustrated with two classes of nonlinear systems of a given structure, for which partial stability is analyzed in parameter space. Attention is drawn to the expediency of using a one-parameter family of functionals.
{"title":"On the Partial Stability of Nonlinear Discrete-Time Systems with Delay","authors":"Vladimir Vorotnikov","doi":"10.15622/ia.22.6.7","DOIUrl":"https://doi.org/10.15622/ia.22.6.7","url":null,"abstract":"A system of nonlinear discrete (finite-difference) of a general form with a bounded delay is considered. Interest in the tasks of qualitative analysis of such systems has increased significantly in recent years. At the same time, the problem of stability with respect to all variables of the zero equilibrium position, which has a great generality, is mainly analyzed in domestic and foreign literature. The main research method is a discrete-functional analogue of the direct Lyapunov method. In this article, it is assumed that the system under consideration admits a “partial” (in some part of the state variables) zero equilibrium position. The problem of stability of a given equilibrium position is posed, and stability is considered not in all, but only in relation to a part of the variables that determine this equilibrium position. Such a problem belongs to the class of problems of partial stability, which are actively studied for systems of various forms of mathematical description. The proposed statement of the problem complements the scope of the indicated studies in relation to the system under consideration. To solve this problem, a discrete version of the Lyapunov– Krasovskii functionals method is used in the space of discrete functions with appropriate specification of the functional requirements. To expand the capabilities of this method, it is proposed to use two types of additional auxiliary (vector, generally speaking) discrete functions in order to: 1) adjustments of the phase space region of the system in which the Lyapunov–Krasovskii functional is constructed; 2) finding the necessary estimates of the functionals and their differences (increment) due to the system under consideration, on the basis of which conclusions about partial stability are made. The expediency of this approach lies in the fact that as a result, the Lyapunov-Krasovskii functional, as well as its difference due to the system under consideration, can be alternating in the domain that is usually considered when analyzing partial stability. Sufficient conditions of partial stability, partial uniform stability, and partial uniform asymptotic stability of the specified type are obtained. The features of the proposed approach are shown on the example of two classes of nonlinear systems of a given structure, for which partial stability is analyzed in parameter space. Attention is drawn to the expediency of using a one-parameter family of functionals.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":" 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135191142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scenario: System reliability monitoring focuses on determining the extent to which a system works as expected (under certain conditions and over time) based on its requirements. The edge computing environment is heterogeneous and distributed, and it may lack central control due to the scope, number, and volume of stakeholders. Objective: To identify and characterize real-time system reliability monitoring strategies that employ artificial intelligence models to support decision-making processes. Methodology: An analysis based on a systematic mapping study was performed on December 14, 2022, using the IEEE and Scopus databases. Results: 50 articles addressed the subject between 2013 and 2022, with growing interest. The core use of this technology relates to the networking and health areas, articulating body sensor networks or data-policy management (collection, routing, transmission, and workload management) with edge computing. Conclusions: Real-time reliability monitoring in edge computing is ongoing and still nascent. It lacks standards but has gained importance and interest over the last two years. Most articles focused on push-based data collection methods supporting centralized decision-making strategies. In addition to networking and health, deployments concentrated on industrial and environmental monitoring. However, there are multiple opportunities for improvement, e.g., data interoperability, federated and collaborative decision-making models, formalization of the experimental design of the measurement process, data sovereignty, organizational memory to capitalize on previous knowledge (and experience), and calibration and recalibration strategies for data sources.
{"title":"Real-Time Reliability Monitoring on Edge Computing: a Systematic Mapping","authors":"Mario José Diván, Dmitry Shchemelinin, Marcos E. Carranza, Cesar Ignacio Martinez-Spessot, Mikhail Buinevich","doi":"10.15622/ia.22.6.1","DOIUrl":"https://doi.org/10.15622/ia.22.6.1","url":null,"abstract":"Scenario: System reliability monitoring focuses on determining the level at which the system works as expected (under certain conditions and over time) based on requirements. The edge computing environment is heterogeneous and distributed. It may lack central control due to the scope, number, and volume of stakeholders. Objective: To identify and characterize the Real-time System Reliability Monitoring strategies that have considered Artificial Intelligence models for supporting decision-making processes. Methodology: An analysis based on the Systematic Mapping Study was performed on December 14, 2022. The IEEE and Scopus databases were considered in the exploration. Results: 50 articles addressing the subject between 2013 and 2022 with growing interest. The core use of this technology is related to networking and health areas, articulating Body sensor networks or data policies management (collecting, routing, transmission, and workload management) with edge computing. Conclusions: Real-time Reliability Monitoring in edge computing is ongoing and still nascent. It lacks standards but has taken importance and interest in the last two years. Most articles focused on Push-based data collection methods for supporting centralized decision-making strategies. Additionally, to networking and health, it concentrated and deployed on industrial and environmental monitoring. However, there are multiple opportunities and paths to walk to improve it. E.g., data interoperability, federated and collaborative decision-making models, formalization of the experimental design for measurement process, data sovereignty, organizational memory to capitalize previous knowledge (and experiences), calibration and recalibration strategies for data sources.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":" 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135191863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The possibility and expediency of forecasting in stock markets are analyzed using the methods and approaches of statistical mechanics. The apparatus of statistical mechanics is applied to analyze and forecast one of the most important indicators of the market: the distribution of its logarithmic returns. The Lotka-Volterra model, used in ecology to describe systems of the "predator-prey" type, serves as the initial model; it approximates market dynamics adequately. The article exploits its Hamiltonian property, which makes it possible to apply the apparatus of statistical mechanics. Using the principle of maximum entropy, statistical mechanics provides a probabilistic approach adapted to the conditions of stock market uncertainty. The canonical variables of the Hamiltonian are taken to be the logarithms of stock and bond prices, and the joint probability distribution of stock and bond prices is obtained as a Gibbs distribution. The Boltzmann factor in the Gibbs distribution allows estimating the probability of particular stock and bond prices and yields an analytical expression for the logarithmic return that gives more accurate results than the widely used normal (Gaussian) distribution. In its characteristics, the resulting distribution resembles the Laplace distribution. Its main characteristics are calculated: the mean, variance, skewness, and kurtosis. The mathematical results are presented graphically. An explanation is given of the cause-and-effect mechanism that drives changes in market returns. For this purpose, Theodore Modis's idea of competition between stocks and bonds for the attention and money of investors is developed (by analogy with the turnover of biomass in models of the "predator-prey" type in biology). The results of the study are of interest to investors, theorists, and practitioners of the stock market: they allow making thoughtful and balanced investment decisions thanks to a more realistic view of the expected return and a more adequate assessment of investment risk.
{"title":"Forecasting in Stock Markets Using the Formalism of Statistical Mechanics","authors":"Yuriy Bibik","doi":"10.15622/ia.22.6.9","DOIUrl":"https://doi.org/10.15622/ia.22.6.9","url":null,"abstract":"The possibility and expediency of forecasting in the stock markets are analyzed analytically using the methods and approaches of statistical mechanics. The apparatus of statistical mechanics is used to analyze and forecast one of the most important indicators of the market – the distribution of its logarithmic profitability. The Lotka-Volterra model used in ecology to describe systems of the \"predator-prey\" type was used as the initial model. It approximates market dynamics adequately. In the article, its Hamiltonian property is used, which makes it possible to apply the apparatus of statistical mechanics. The apparatus of statistical mechanics (using the principle of maximum entropy) makes it possible to implement a probabilistic approach that is adapted to the conditions of stock market uncertainty. The canonical variables of the Hamiltonian are presented as logarithms of stock and bond prices, the joint probability distribution function of stock and bond prices is obtained as a Gibbs distribution. The Boltzmann factor, included in the Gibbs distribution, allows us to estimate the probability of the occurrence of certain stock and bond prices and obtain an analytical expression for calculating the logarithmic return, which gives more accurate results than the widely used normal (Gaussian) distribution. According to its characteristics, the resulting distribution resembles the Laplace distribution. The main characteristics of the resulting distribution are calculated – the mean value, variance, asymmetry, and kurtosis. Mathematical results are presented graphically. An explanation is given of the cause-and-effect mechanism that causes a change in the profitability of the market. For this, the idea of Theodore Modis about the competition between stocks and bonds for the attention and money of investors is developed (by analogy with the turnover of biomass in models of the \"predator-prey\" type in biology). The results of the study are of interest to investors, theorists, and practitioners of the stock market. They allow us to make thoughtful and balanced investment decisions due to a more realistic idea of the expected return and a more adequate assessment of investment risk.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":" 410","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135185985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximation of seasonal vegetation index time series is the basis for monitoring agricultural crops, identifying them, and classifying cropland. For cropland of the Khabarovsk Territory, NDVI and EVI time series for May through October 2021 were constructed from Sentinel-2A multispectral images (20 m resolution) with a cloud mask applied. Five functions were used to approximate the time series: a Gaussian function, a double Gaussian, a double sine wave, a Fourier series, and a double logistic function. Characteristics of the extrema of the approximated time series were computed for different types of arable land: buckwheat, perennial grasses, soybeans, fallow, and ley. It was shown that each land type has its own characteristic curve shape. It was found (p < 0.05) that the Fourier approximation showed the highest accuracy for the NDVI and EVI series (average errors of 8.5% and 16.0%, respectively). Approximating the NDVI series with a double sine wave, double Gaussian, or double logistic function increased the error to 8.9-10.6%. Approximating the EVI series with a double Gaussian or double sine wave increased the average errors to 18.3-18.5%. A posteriori analysis using Tukey's test showed that for soybean, fallow, and ley lands it is better to approximate the vegetation indices with a Fourier series, double Gaussian, or double sine wave, while for buckwheat the Fourier series or double Gaussian is advisable. Overall, the average approximation error of the seasonal NDVI time series is 1.5-4 times smaller than that of the EVI series.
{"title":"Аппроксимация временных рядов индексов вегетации (NDVI и EVI) для мониторинга сельхозкультур (посевов) Хабаровского края","authors":"Alexey Stepanov, Elizaveta Fomina, Lyubov Illarionova, Konstantin Dubrovin, Denis Fedoseev","doi":"10.15622/ia.22.6.8","DOIUrl":"https://doi.org/10.15622/ia.22.6.8","url":null,"abstract":"Approximation of the series of the seasonal vegetation index time series is the basis for monitoring agricultural crops, their identification and cropland classification. For cropland of the Khabarovsk Territory in the period from May to October 2021, NDVI and EVI time series were constructed using Sentinel-2A (20 m) multispectral images using a cloud mask. Five functions were used to approximate time series: Gaussian function; double Gaussian; double sine wave; Fourier series; double logistic. Characteristics of extremums for approximated time series for different types of arable land were built and calculated: buckwheat, perennial grasses, soybeans, fallow and ley. It was shown that each type requires a characteristic species. It was found (p<0.05) that Fourier approximation showed the highest accuracy for NDVI and EVI series (average error, respectively, 8.5% and 16.0%). Approximation of the NDVI series using a double sine, double Gaussian and double logistic function resulted in an error increase of 8.9-10.6%. Approximation of EVI series based on double Gaussian and double sine wave causes an increase in average errors up to 18.3-18.5%. The conducted a posteriori analysis using the Tukey criterion showed that for soybean, fallow and ley lands, it is better to use the Fourier series, double Gaussian or double sine wave to approximate vegetation indices, for buckwheat it is advisable to use the Fourier series or double Gaussian. In general, the average approximation error of the NDVI seasonal time series is 1.5-4 times less than the approximation error of the EVI series.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":" July","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135186272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To date, there is a generally accepted notion of intellectual capital, and approaches have been developed to measure it at the micro and macro levels. Patent analytics methods for analyzing technological trends have also been developed. At the conceptual level, it is known that intellectual capital and technological trends influence each other, but there are no methodological tools for quantifying this influence using data from various sources. The purpose of the study was to quantify the mutual influence of national intellectual capital and modern management information technologies at the macro level. The mathematical foundations for distinguishing the components of intellectual capital and technologies were considered. The hypothesis about the statistical significance of the mutual influence of intellectual capital and management information technologies was confirmed. The dependence was approximated by a linear regression of the intellectual capital index on the logarithm of a country's patent activity index in the field of IT management methods, which can be interpreted as a slowdown in the growth of the intellectual capital index once a certain level of patent activity is reached. It was established that the more developed the economy, the higher the level of intellectual capital and the wider the dissemination of IT management methods. China and India are clear exceptions to this pattern. China, an upper-middle-income country, demonstrates interrelated values of the intellectual capital index and prevalence of IT management methods higher than those of other countries at its level of economic development. India, ranked third among lower-middle-income countries, shows rates of intellectual capital development and spread of IT management methods commensurate with upper-middle-income countries. Further research may test hypotheses about quantitative relationships between intellectual capital and technological development using the proposed method. The identified dependencies should be detailed by IPC codes and by components of intellectual capital, and dependencies should be identified for other technological areas.
{"title":"Взаимное влияние интеллектуального капитала и информационных технологий управления","authors":"Boris Sokolov, Dmitry Verzilin, Tatyana Maximova, Min Zhang","doi":"10.15622/ia.22.5.2","DOIUrl":"https://doi.org/10.15622/ia.22.5.2","url":null,"abstract":"To date, there is a generally accepted idea of intellectual capital, and approaches have been developed to measure it at the micro and macro levels. Methods of patent analytics for the analysis of technological trends have been developed. At the conceptual level, it is known that there is a mutual influence of intellectual capital and technological trends, but there are no methodological developments for quantifying such influence using data from various sources. The purpose of the study was to quantify the mutual influence of national intellectual capital and modern management information technologies at the macro level. The mathematical foundations for the distinction of the components of intellectual capital and technologies were considered. The hypothesis about the statistical significance of the mutual influence of intellectual capital and management information technologies was confirmed. The dependence was approximated by linear regression of the intellectual capital index on the logarithm of the country's patent activity index in the field of IT management methods, which can be interpreted as a slowdown in the growth of the intellectual capital index when a certain level of patent activity is reached. It has been established that the more developed the economy, the higher the level of intellectual capital and the higher level of dissemination of IT management methods. China and India are clear exceptions to this pattern. China, which is an upper-middle-income country, demonstrates higher than the countries of its level of economic development, interconnected values of the index of intellectual capital, and the prevalence of IT-management methods. India, ranked 3rd among lower-middle-income countries, has commensurate rates of development of intellectual capital and the spread of IT-management methods with upper-middle-income countries. Further research may be related to testing hypotheses about quantitative relationships between intellectual capital and technological development via the proposed method. It is necessary to detail the identified dependencies by IPC codes and components of intellectual capital and identify dependencies for other technological areas.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135864454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, various attempts have been made to characterize information security threats, particularly in the industrial sector. Yet a number of poorly understood threats could jeopardize the safety of food processing industry data, information, and resources. This paper aims to increase the efficiency of information security risk analysis in food processing industrial information systems. The participants in the study were experts in executive management, regular staff, technical and asset operators, third-party consultancy companies, and risk management professionals from the food processing sector in Sub-Saharan Africa. Risk identifications were gathered through a questionnaire and interviews combining qualitative and quantitative risk analysis approaches, and a fuzzy inference system was applied to analyze the risk factors. The findings revealed that, among information security concerns, electronic data under a data theft threat has a high risk score of 75.67%, while human resource management (HRM) under a social engineering threat has a low risk impact of 26.67%. Thus, the high-probability risk components call for rapid corrective action, and the root causes of such threats should be identified and controlled before their detrimental effects are experienced. It is also important to note that primary interests and worldwide policies must be taken into consideration when examining information security in food processing industrial information systems.
{"title":"Анализ рисков информационной безопасности в пищевой промышленности с использованием системы нечеткого вывода","authors":"Amanuel Asfha, Abhishek Vaish","doi":"10.15622/ia.22.5.5","DOIUrl":"https://doi.org/10.15622/ia.22.5.5","url":null,"abstract":"Recently, different attempts have been made to characterize information security threats, particularly in the industrial sector. Yet, there have been a number of mysterious threats that could jeopardize the safety of food processing industry data, information, and resources. This research paper aims to increase the efficiency of information security risk analysis in food processing industrial information systems, and the participants in this study were experts in executive management, regular staff, technical and asset operators, third-party consultancy companies, and risk management professionals from the food processing sector in Sub-Saharan Africa. A questionnaire and interview with a variety of questions using qualitative and quantitative risk analysis approaches were used to gather the risk identifications, and the fuzzy inference system method was also applied to analyze the risk factor in this paper. The findings revealed that among information security concerns, electronic data in a data theft threat has a high-risk outcome of 75.67%, and human resource management (HRM) in a social engineering threat has a low-risk impact of 26.67%. Thus, the high-probability risk factors need quick action, and the risk components with a high probability call for rapid corrective action. Finally, the root causes of such threats should be identified and controlled before experiencing detrimental effects. It's also important to note that primary interests and worldwide policies must be taken into consideration while examining information security in food processing industrial information systems.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135864215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extending smartphone operating time is an ongoing endeavour that becomes more important with each passing year. It can be achieved through more advanced hardware or by introducing energy-aware practices into software, and the latter is the more accessible approach. As the CPU is one of the most power-hungry smartphone components, Dynamic Voltage and Frequency Scaling (DVFS) adjusts the CPU frequency to the current computational needs, and various algorithms, both energy-aware and energy-agnostic, have already been developed. Following our previous work on the subject, we propose a novel DVFS approach that uses simultaneous perturbation stochastic approximation (SPSA) with two noisy observations to track the optimal frequency, and we implement several algorithms based on it. We also address the hardware lag between the signal for the CPU to change frequency and the actual update. As Android OS can use either the default task scheduler or an energy-aware one (EAS), which can exploit heterogeneous mobile CPU architectures such as ARM big.LITTLE, we also explore an integration scheme between the proposed algorithms and the OS schedulers. A model-based testing methodology for comparing the developed algorithms against existing ones is presented, and a test suite reflecting real-world use-case scenarios is outlined. Our experiments show that the SPSA-based algorithm works well with EAS under a simplified integration scheme, showing CPU performance comparable to other energy-aware DVFS algorithms with decreased energy consumption.
{"title":"On Stochastic Optimization for Smartphone CPU Energy Consumption Decrease","authors":"Makar Pelogeiko, Stanislav Sartasov, Oleg Granichin","doi":"10.15622/ia.22.5.3","DOIUrl":"https://doi.org/10.15622/ia.22.5.3","url":null,"abstract":"Extending smartphone working time is an ongoing endeavour becoming more and more important with each passing year. It could be achieved by more advanced hardware or by introducing energy-aware practices to software, and the latter is a more accessible approach. As the CPU is one of the most power-hungry smartphone devices, Dynamic Voltage Frequency Scaling (DVFS) is a technique to adjust CPU frequency to the current computational needs, and different algorithms were already developed, both energy-aware and energy-agnostic kinds. Following our previous work on the subject, we propose a novel DVFS approach to use simultaneous perturbation stochastic approximation (SPSA) with two noisy observations for tracking the optimal frequency and implementing several algorithms based on it. Moreover, we also address an issue of hardware lag between a signal for the CPU to change frequency and its actual update. As Android OS could use a default task scheduler or an energy-aware one, which is capable of taking advantage of heterogeneous mobile CPU architectures such as ARM big.LITTLE, we also explore an integration scheme between the proposed algorithms and OS schedulers. A model-based testing methodology to compare the developed algorithms against existing ones is presented, and a test suite reflecting real-world use case scenarios is outlined. Our experiments show that the SPSA-based algorithm works well with EAS with a simplified integration scheme, showing CPU performance comparable to other energy-aware DVFS algorithms and a decreased energy consumption.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135865018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To provide an accurate and timely response to different types of attacks, intrusion detection systems collect and analyze large amounts of data, which may include information with restricted access, such as personal data or trade secrets. Consequently, such systems can be seen as an additional source of risks associated with handling sensitive information and breaching its security. Applying the federated learning paradigm to build analytical models for attack and anomaly detection can significantly reduce such risks, because locally generated data is not transmitted to any third party and model training is done locally, on the data sources. Federated training for intrusion detection solves the problem of training on data that belongs to different organizations and that, due to the need to protect commercial or other secrets, cannot be made public. This approach also expands and diversifies the set of data on which machine learning models are trained, thereby increasing the detectability of heterogeneous attacks. Because it can overcome the aforementioned problems, the approach is actively used in the design of new intrusion and anomaly detection techniques. The authors systematically explore existing federated learning-based solutions for intrusion and anomaly detection, study their advantages, and formulate open challenges associated with their practical application. Particular attention is paid to the architecture of the proposed systems and the intrusion detection methods and models used; approaches to modeling interactions between multiple system users and distributing data among them are also discussed. The authors conclude by formulating the open problems that must be solved before federated learning-based intrusion detection systems can be applied in practice.
{"title":"Аналитический обзор подходов к обнаружению вторжений, основанных на федеративном обучении: преимущества использования и открытые задачи","authors":"Evgenia Novikova, Elena Fedorchenko, Igor Kotenko, Ivan Kholod","doi":"10.15622/ia.22.5.4","DOIUrl":"https://doi.org/10.15622/ia.22.5.4","url":null,"abstract":"To provide an accurate and timely response to different types of attacks, intrusion detection systems collect and analyze a large amount of data, which may include information with limited access, such as personal data or trade secrets. Consequently, such systems can be seen as an additional source of risks associated with handling sensitive information and breaching its security. Applying the federated learning paradigm to build analytical models for attack and anomaly detection can significantly reduce such risks because locally generated data is not transmitted to any third party, and model training is done locally - on the data sources. Using federated training for intrusion detection solves the problem of training on data that belongs to different organizations, and which, due to the need to protect commercial or other secrets, cannot be placed in the public domain. Thus, this approach also allows us to expand and diversify the set of data on which machine learning models are trained, thereby increasing the level of detectability of heterogeneous attacks. Due to the fact that this approach can overcome the aforementioned problems, it is actively used to design new approaches for intrusion and anomaly detection. The authors systematically explore existing solutions for intrusion and anomaly detection based on federated learning, study their advantages, and formulate open challenges associated with its application in practice. Particular attention is paid to the architecture of the proposed systems, the intrusion detection methods and models used, and approaches for modeling interactions between multiple system users and distributing data among them are discussed. The authors conclude by formulating open problems that need to be solved in order to apply federated learning-based intrusion detection systems in practice.","PeriodicalId":491127,"journal":{"name":"Informatika i avtomatizaciâ","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135864065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}