Simulation model of wireless ad-hoc network to study algorithms of traffic routing
Pub Date: 2022-08-31 | DOI: 10.37791/2687-0649-2022-17-4-75-86
M. S. Pestin, A. Novikov
Communication network simulators are software tools designed to model, explore, test and debug network technologies, including wireless decentralized self-organizing (ad-hoc) networks. They greatly simplify the research, development and optimization of routing protocols in such networks. However, the well-known simulators have a number of disadvantages, including the difficulty of adding custom extensions to ad-hoc routing protocols, the absence of the required network stack, the lack of routing algorithm visualization modes, low performance, and difficulty in debugging communication protocols. The purpose of this work is to create a simulation model of a wireless network for exploring, debugging and evaluating newly developed routing algorithms and protocols for ad-hoc networks. Requirements for interface ergonomics, visualization of algorithm operation, statistics collection, and the creation of various network operation scenarios come to the fore. The article proposes a structure for the simulation model that includes modules for the network subscriber, application software, the network layer of the OSI model, the radio module, the radio transmission environment, statistics collection, visualization and scenario management. Discrete-event modeling was used to solve these tasks. To create a simulator of wireless decentralized networks and routing algorithms, a set of classes implementing the modules of the simulation model was developed. Based on the proposed structure, module classes and discrete-event simulation algorithm, a software implementation of the model was created using the C++ programming language and the Qt framework. The simulation model was then used in an experimental study of the effectiveness of a network routing algorithm. The proposed software will simplify the development and debugging of routing algorithms and protocols for ad-hoc networks.
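The core of such a simulator is a discrete-event loop that pops timestamped events from a priority queue and dispatches them to the node and radio-medium modules. The article's C++/Qt implementation is not reproduced here; the sketch below is a minimal Python illustration of the discrete-event scheduling idea only, with all class, handler and event names chosen purely for the example.

```python
import heapq

class Event:
    """A scheduled action: fire `handler(payload)` at simulation time `time`."""
    def __init__(self, time, handler, payload=None):
        self.time, self.handler, self.payload = time, handler, payload
    def __lt__(self, other):            # heapq orders events by time
        return self.time < other.time

class Simulator:
    """Minimal discrete-event core: a time-ordered queue of events."""
    def __init__(self):
        self.now = 0.0
        self._queue = []

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self._queue, Event(self.now + delay, handler, payload))

    def run(self, until=float("inf")):
        while self._queue and self._queue[0].time <= until:
            event = heapq.heappop(self._queue)
            self.now = event.time       # advance simulated time to the event
            event.handler(event.payload)

# Usage: a node broadcasting a hello packet that the medium delivers after a delay.
sim = Simulator()

def deliver(packet):
    print(f"t={sim.now:.3f}s node B received {packet}")

def send_hello(_):
    print(f"t={sim.now:.3f}s node A sends HELLO")
    sim.schedule(0.002, deliver, "HELLO")   # assumed 2 ms propagation + transmission delay

sim.schedule(0.1, send_hello)
sim.run(until=1.0)
```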
{"title":"Simulation model of wireless ad-hoc network to study algorithms of traffic routing","authors":"M. S. Pestin, A. Novikov","doi":"10.37791/2687-0649-2022-17-4-75-86","DOIUrl":"https://doi.org/10.37791/2687-0649-2022-17-4-75-86","url":null,"abstract":"Communication network simulators are software designed to model, explore, test and debug network technologies, including wireless decentralized self-organizing networks or ad-hoc networks. They greatly simplify the research, development and optimization of routing protocols in these networks. However, the well-known simulators have a number of disadvantages, including the difficulty of adding custom extensions to ad-hoc network routing protocols, the lack of the necessary network stack, the lack of routing algorithm visualization modes, low performance, and difficulty in debugging communication protocols. The purpose of this work is to create a simulation model of a wireless network that would allow us to explore, debug and evaluate the developed algorithms and routing protocols for ad-hoc networks. At the same time, the requirements for interface ergonomics and the ability to visualize the operation of algorithms, ensure the collection of statistics, and create various scenarios for the operation of the network come to the fore. The article proposes the structure of the simulation model, which includes the modules of the network subscriber, application software, network layer of the OSI data transmission model, radio module, radio transmission environment, statistics collection, visualization and scenario management. To solve the tasks set, the approach of discrete-event modeling was used. To create a simulator of wireless decentralized networks and routing algorithms, a set of classes was developed that implement the modules of the simulation model. Based on the proposed structure, module classes and discrete event simulation algorithm, a software implementation of the simulation model was created using the C++ programming language and the Qt framework. The developed simulation model was used in the course of an experimental study of the effectiveness of the network routing algorithm. The proposed software will simplify the development and debugging of algorithms and routing protocols for ad-hoc networks.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":"10 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82961783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictive models integration with an environmental monitoring IoT platform
Pub Date: 2022-08-31 | DOI: 10.37791/2687-0649-2022-17-4-5-16
A. Kychkin, Oleg V. Gorshkov, Mikhail Kukarkin
The research focuses on the development of applied software systems for automated environmental monitoring. It considers the task of developing and integrating applied software, in particular computational and analytical models based on machine learning (ML) methods, with an IoT platform for digital eco-monitoring of industrial enterprises. Such a platform is used to create software and hardware systems of the CEMS (Continuous Emissions Monitoring System) class, designed for continuous monitoring of pollutant emissions into the atmosphere at production facilities. ML tools integrated with the platform significantly expand the functionality of the existing CEMS, in particular by allowing new SaaS services for forecasting the dynamics of pollution distribution to be built quickly. Given the high requirements for industrial systems, there is a need for a specialized software product – an analytical server that manages the connected predictive ML models with the required level of service quality, including automatic initialization of new analytical scripts as classes, isolation of individual components, automatic recovery after failures, data security and safety. The paper proposes a scheme of functional and algorithmic interaction between the digital eco-monitoring IoT platform and the analytical server. The proposed implementation of the analytical server has a hierarchical structure, at the top of which is an application capable of accepting high-level REST requests to initialize calculations in real time. This approach minimizes the impact of one analytical script (class) on another and allows the functionality of the platform to be extended in "hot" mode, that is, without stopping or reloading it. Results demonstrating automatic initialization and connection of basic ML models for predicting pollutant concentrations are presented.
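The abstract describes an analytical server that registers predictive model classes and exposes them through high-level REST requests, with new models attachable without restarting the service. The article's actual server is not available here; the sketch below is a hypothetical minimal Flask illustration of that idea, with the route names, class names and payload fields invented for the example.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
MODELS = {}  # registry: model name -> model instance (filled at runtime, "hot")

class BaselinePollutantModel:
    """Toy stand-in for a predictive ML model of pollutant concentration."""
    def predict(self, features):
        # Trivial persistence forecast: tomorrow's value equals today's.
        return features.get("concentration_today", 0.0)

def register_model(name, model):
    """Attach a new analytical class without stopping the server."""
    MODELS[name] = model

@app.route("/models/<name>/predict", methods=["POST"])
def predict(name):
    model = MODELS.get(name)
    if model is None:
        return jsonify({"error": f"model '{name}' is not registered"}), 404
    features = request.get_json(force=True)
    return jsonify({"model": name, "prediction": model.predict(features)})

if __name__ == "__main__":
    register_model("no2_baseline", BaselinePollutantModel())
    app.run(port=8080)
```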
{"title":"Predictive models integration with an environmental monitoring IoT platform","authors":"A. Kychkin, Oleg V. Gorshkov, Mikhail Kukarkin","doi":"10.37791/2687-0649-2022-17-4-5-16","DOIUrl":"https://doi.org/10.37791/2687-0649-2022-17-4-5-16","url":null,"abstract":"The research focuses on the development of applied software systems for automated environmental monitoring. The task of developing and integrating applied software, in particular calculation and analytical models based on machine learning (ML) methods, with an IoT platform of digital eco-monitoring for industrial enterprises is considered. Such a platform is used to create software and hardware systems of CEMS – Continuous Emissions Monitoring System class, designed for continuous monitoring of pollutant emissions into the atmospheric air at production facilities. Use of ML tools integrated with the platform allows to expand significantly the functionality of the existing CEMS, in particular to quickly build new SaaS services for forecasting the dynamics of pollution distribution. Given the high requirements for industrial systems, there is a need to create a specialized software product – an analytical server that implements the management of connected predictive analytical ML models with the required level of service quality, including automatic initialization of new analytical scripts as classes, isolation of individual components, automatic recovery after failures, data security and safety. The paper proposes a scheme of functional and algorithmic interaction between the IoT platform of digital eco- monitoring and the analytical server. The proposed implementation of the analytical server has a hierarchical structure, at the top of which is an application capable of accepting high-level REST requests to initialize calculations in real time. This approach minimizes the impact of one analytical script (class) on another, as well as extending the functionality of the platform in \"hot\" mode, that is, without stopping or reloading. Results demonstrating automatic initialization and connection of basic ML models for predicting pollutant concentrations are presented.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":"148 1 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91122000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data mining in the management of the Russian higher school
Pub Date: 2022-08-31 | DOI: 10.37791/2687-0649-2022-17-4-17-36
Mikhail V. Zaboev, V. Khalin, G. Chernova, A. Yurkov
For a comprehensive assessment of the quality of management decisions, it is necessary to take into account heterogeneous information presented both in numerical form and in natural-language expressions. Data mining methods, including neural network clustering and fuzzy set theory, prove effective here. The article presents the authors' approach to using these methods for evaluating risks and the quality of management decisions in Russian higher education, taking the implementation of the most ambitious project in this field, Project 5-100, as an example. On this example, the expediency of neural network clustering for assessing whether the goals of such a large-scale project can be achieved is demonstrated. Clustering the information database used for the analysis makes it possible to carry out an objective selection of candidate universities for the right to receive state subsidies, as well as to adjust the composition of the project participants. Another data mining method – the construction of a complex of fuzzy inference systems – confirmed the possibility of a quantitative final evaluation of the project based on expert verbal estimates. At the same time, the neural network clustering initially indicated that the goals of Project 5-100 were unattainable. The complex of fuzzy inference systems confirmed this conclusion with a very low quantitative final assessment of the project derived from verbal expert opinions.
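The article relies on neural network clustering of university indicators to group candidate institutions. The authors' specific clustering model and indicator set are not given in the abstract; as a rough illustration of the grouping step only, the sketch below clusters a few hypothetical university indicators with k-means, used here as a plain stand-in for the article's neural network clustering.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical indicators per university: [publications per staff member,
# share of international students %, funding per student, citation index].
universities = ["Univ A", "Univ B", "Univ C", "Univ D", "Univ E", "Univ F"]
indicators = np.array([
    [1.8, 12.0,  9.5, 0.9],
    [0.4,  3.0,  4.1, 0.3],
    [2.1, 15.0, 11.0, 1.1],
    [0.5,  2.5,  3.8, 0.2],
    [1.6, 10.0,  8.7, 0.8],
    [0.6,  4.0,  4.5, 0.3],
])

# Scale features so no single indicator dominates the distance metric.
X = StandardScaler().fit_transform(indicators)

# Two clusters: e.g. "strong candidates" vs "the rest".
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for name, label in zip(universities, labels):
    print(f"{name}: cluster {label}")
```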
{"title":"Data mining in the management of the Russian higher school","authors":"Mikhail V. Zaboev, V. Khalin, G. Chernova, A. Yurkov","doi":"10.37791/2687-0649-2022-17-4-17-36","DOIUrl":"https://doi.org/10.37791/2687-0649-2022-17-4-17-36","url":null,"abstract":"For a comprehensive assessment of the management decisions quality, it is necessary to take into account heterogeneous information presented both in numerical form and in natural language expressions. The effective occurs the use of data mining including neural network clustering and fuzzy set theory. The article presents our approach to the use of these methods for evaluating risks and the management decisions quality in Russian higher education on the example of the implementation of the most ambitious Project 5-100 for it. On the example, the expediency of the neural network clustering to assess the possibility of achieving the goals of any such large-scale project has been proved. Clustering the information database used for the analysis, makes it possible to carry out an objective selection of candidate universities-candidates for the right to receive state subsidies, as well as to adjust the composition of the Project participants. Another methods of intellectual analysis – the construction of a complex of fuzzy inference systems, – confirmed the possibility of a quantitative fi evaluating of the project based on the expert verbal estimates of the project. At the same time, the neural network clustering initially illustrated the unattainability of the Project 5-100 goals. The use of a complex of fuzzy inference systems confirmed this statement by the very low quantitative final assessment of the project on the basis of verbal expert opinions.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":"6 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79260367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solving the inverse kinematics problem for sequential robot manipulators based on fuzzy numerical methods
Pub Date: 2022-08-31 | DOI: 10.37791/2687-0649-2022-17-4-113-126
V. Borisov, A. M. Sokolov, A. P. Zharkov, Oleg P. Kultygin
Nowadays the introduction of robotic systems is one of the most common forms of automating technological operations in various spheres of human activity. Among robotic systems, a special place is occupied by sequential multi-link robotic manipulators (SRM). SRM have become widespread due to their relatively small dimensions and high maneuverability, which makes them indispensable for a variety of tasks. In practice, the effectiveness of SRM operation can be affected by various kinds of fuzzy factors of the external environment. Among these external factors there is a group that affects the ability to determine the exact target position; such factors often affect technical vision systems. This problem is especially relevant for special-purpose mobile robots operating in aggressive environmental conditions. A similar situation also occurs when a medical robot manipulator is used for minimally invasive surgery, where the role of the control and monitoring system is assumed by a human operator. In this regard, organizing effective control that takes into account the influence of external fuzzy factors preventing correct recognition of the target position of the SRM instrument is an urgent problem. The authors consider the solution of the inverse kinematics problem for SRM based on fuzzy numerical methods, taking into account the possible occurrence of singular configurations in the process of solving.
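The abstract addresses inverse kinematics for serial manipulators with attention to singular configurations. The authors' fuzzy numerical methods are not reproduced here; the sketch below shows only the standard crisp baseline such methods typically extend – iterative inverse kinematics for a hypothetical planar two-link arm using a damped least-squares Jacobian step, where the damping term keeps the update bounded near singularities. Link lengths, the target point and the damping value are arbitrary.

```python
import numpy as np

L1, L2 = 1.0, 0.8          # link lengths of a hypothetical planar 2-link arm

def forward(q):
    """End-effector position for joint angles q = [q1, q2]."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_damped(target, q0, damping=0.05, steps=200, tol=1e-6):
    """Damped least-squares IK: robust to (near-)singular Jacobians."""
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        err = target - forward(q)
        if np.linalg.norm(err) < tol:
            break
        J = jacobian(q)
        # (J^T J + lambda^2 I)^{-1} J^T e  -- damping bounds the step near singularities
        dq = np.linalg.solve(J.T @ J + damping**2 * np.eye(2), J.T @ err)
        q += dq
    return q

q = ik_damped(target=np.array([1.2, 0.9]), q0=[0.3, 0.3])
print("joint angles:", q, "reached:", forward(q))
```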
{"title":"Solving the inverse kinematics problem for sequential robot manipulators based on fuzzy numerical methods","authors":"V. Borisov, A. M. Sokolov, A. P. Zharkov, Oleg P. Kultygin","doi":"10.37791/2687-0649-2022-17-4-113-126","DOIUrl":"https://doi.org/10.37791/2687-0649-2022-17-4-113-126","url":null,"abstract":"Nowadays the introduction of robotic systems is one of the most common forms of the technological operations automation in various spheres of human activity. Among the robotic systems a special place is occupied by sequential multi-link robotic manipulators (SRM). SRM have become widespread due to relatively small dimensions and high maneuverability, which makes their use indispensable to solve various tasks. In practice, the effectiveness of the functioning of the SRM can be influenced by various types of external environment fuzzy factors. Among the external factors there is a group affecting the ability to determine the exact target position. Such factors often affect technical vision systems. This problem is especially relevant for special purpose mobile robots operating in aggressive environmental conditions. A situation similar to the described one also occurs when a medical robot manipulator is used for minimally invasive surgery, when the role of the control and monitoring system is assumed by an operator. In this regard, the organization of effective control taking into account influence of the external fuzzy factors, that prevent the correct recognition of the target position of the SRM instrument, is an urgent problem. The authors consider the solution of the inverse kinematics problem for SRM based on the use of fuzzy numerical methods, taking into account the possible occurrence of singular configurations in the process of solving.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":"40 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79791580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of the architecture of a recommendation system for choosing online courses
Pub Date: 2022-08-31 | DOI: 10.37791/2687-0649-2022-17-4-87-96
T. A. Shkodina
The article substantiates the relevance of developing a recommender system in the field of e-learning. The main approaches to building recommender systems are analyzed: collaborative, content-based and hybrid filtering. The main objects of a recommender system for choosing online courses are presented: the student, training modules (online courses), and the elements of knowledge that the user can acquire by the end of training. On the algorithmic side, methods for building recommender systems such as machine learning, neural networks and genetic algorithms are considered. Problems of these methods have been identified: sparsity; cold start; scalability; and finding, within the common set of elements, those most likely to be preferred by the user. The main problem of recommender systems is obtaining an accurate, high-quality recommendation of educational objects in accordance with user preferences. It is concluded that it is necessary to build a recommender system architecture that includes a model of an individual learning trajectory. Educational objects are filtered with the help of a genetic algorithm. The expediency of using a microservice approach to create the web application is established. The functional tasks of the developed system are highlighted: data collection, analysis of user requests, formation of educational objects along an individual learning trajectory, and issuing recommendations for choosing online courses. An algorithm for the functioning of the recommender system, a scheme of its operation, and the information support for this system have been developed. A general approach to developing a universal recommender system that can be integrated into a client's service is proposed. The purpose of the recommender system for choosing online courses is to offer students the learning objects (or sequences of objects) most appropriate to their characteristics and to the required fragments of knowledge (competencies).
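In the described architecture the filtering of educational objects is performed by a genetic algorithm. The abstract does not give the encoding or fitness function, so the sketch below is a hypothetical minimal genetic algorithm that selects a set of course modules to maximize coverage of target competencies within a time budget; the catalogue, weights and parameters are all invented for illustration.

```python
import random

# Hypothetical course catalogue: module -> (competencies covered, hours required)
CATALOGUE = {
    "python_basics": ({"programming"}, 20),
    "statistics":    ({"statistics"}, 25),
    "ml_intro":      ({"ml", "statistics"}, 30),
    "sql":           ({"databases"}, 15),
    "deep_learning": ({"ml", "programming"}, 40),
    "data_viz":      ({"visualization"}, 10),
}
TARGET = {"programming", "statistics", "ml", "visualization"}
TIME_BUDGET = 80
MODULES = list(CATALOGUE)

def fitness(individual):
    """Reward covered target competencies, penalize exceeding the time budget."""
    chosen = [m for m, used in zip(MODULES, individual) if used]
    covered = set().union(*(CATALOGUE[m][0] for m in chosen)) if chosen else set()
    hours = sum(CATALOGUE[m][1] for m in chosen)
    penalty = max(0, hours - TIME_BUDGET)
    return len(covered & TARGET) * 10 - penalty * 0.5

def evolve(pop_size=30, generations=50, mutation_rate=0.1):
    population = [[random.randint(0, 1) for _ in MODULES] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(MODULES))      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print("recommended modules:", [m for m, used in zip(MODULES, best) if used])
```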
{"title":"Development of the architecture of a recommendation system for choosing online courses","authors":"T. A. Shkodina","doi":"10.37791/2687-0649-2022-17-4-87-96","DOIUrl":"https://doi.org/10.37791/2687-0649-2022-17-4-87-96","url":null,"abstract":"The article provides a rationale for the relevance of developing a recommender system in the field of e-learning. The main approaches to building a recommender system are analyzed: collaborative, content and hybrid filtering. The main objects of the recommender system for choosing online courses are presented: the student, training modules (online courses), elements of knowledge that the user can receive at the end of the training. In algorithmic support, methods for creating recommender systems, such as machine learning, neural networks, genetic algorithms, are considered. Problems in the methods of building recommender systems have been identified: sparseness; cold start; scalability; searching for elements that are most likely to be preferred by the user from a common set of elements. The main problem of recommender systems is to obtain an accurate and high-quality recommendation for the selection of educational objects in accordance with user preferences. It is concluded that it is necessary to build an architecture of a recommender system, including a model of an individual learning trajectory. Filtration of educational objects occurs with the help of a genetic algorithm. The expediency of using a microservice approach to create a web application is determined. The functional tasks of the developed system are highlighted, such as data collection, analysis of user requests, the formation of educational objects using an individual learning trajectory and the issuance of recommendations for choosing online courses. An algorithm for the functioning of the recommender system, a scheme for the operation of the recommender system, as well as information support for the operation of this system have been developed. A general approach to the development of a universal recommender system that can be integrated into the client's service is proposed. The purpose of developing a recommender system for choosing online courses is to provide students with the most appropriate learning objects (sequence of objects) to study in accordance with the characteristics of the student and fragments of knowledge (competencies).","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":"20 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90869250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The method of preprocessing machine learning data for solving computer vision problems
Pub Date: 2022-08-31 | DOI: 10.37791/2687-0649-2022-17-4-47-56
A. E. Trubin, A. Morozov, A. E. Zubanova, V. Ozheredov, V. Korepanova
In the field of machine learning there is no single methodology for data preprocessing, since all stages of this process are unique to a specific task. At the same time, each application area works with a specific data type. The research hypothesis is that the sequence and phases of data preparation for text recognition tasks can be clearly structured. The article discusses the basic principles of data preprocessing and the separation of successive stages as a specific technique for the task of recognizing ABC characters. Images from the ETL set were selected as the source data. Preprocessing consisted of image-processing stages, each of which modified the source data. The first step was cropping, which removed irrelevant information from the image. Next, an approach for converting the image to the original aspect ratio was considered, and a method for converting from grayscale to black-and-white format was determined. At the next stage, the character strokes were artificially thickened for better recognition of printed alphabets. At the last stage of preprocessing, augmentation was performed, which made it possible to recognize ABC characters regardless of their position in space. As a result, the general structure of a data preprocessing methodology for text recognition tasks was built.
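The described pipeline – crop, aspect-ratio-preserving resize, grayscale-to-binary conversion, stroke thickening, and augmentation – maps directly onto common image operations. The article's exact parameters are not given, so the sketch below is an illustrative Pillow version with all thresholds, sizes, file names and angles chosen arbitrarily.

```python
from PIL import Image, ImageFilter, ImageOps

def preprocess(path, box=(10, 10, 110, 110), size=64, threshold=128):
    img = Image.open(path)

    # 1. Cropping: drop regions that carry no useful information.
    img = img.crop(box)

    # 2. Resize while preserving the original aspect ratio, then pad to a square.
    img.thumbnail((size, size))
    img = ImageOps.pad(img, (size, size), color="white")

    # 3. Grayscale, then binary black-and-white via a fixed threshold.
    img = img.convert("L").point(lambda p: 255 if p > threshold else 0)

    # 4. Thicken dark character strokes (MinFilter expands dark pixels on a light background).
    img = img.filter(ImageFilter.MinFilter(3))
    return img

def augment(img, angles=(-10, -5, 5, 10)):
    """5. Simple augmentation: small rotations so position/orientation varies."""
    return [img.rotate(a, fillcolor=255) for a in angles]

sample = preprocess("letter.png")          # hypothetical input file
variants = [sample] + augment(sample)
for i, v in enumerate(variants):
    v.save(f"letter_prep_{i}.png")
```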
{"title":"The method of preprocessing machine learning data for solving computer vision problems","authors":"A. E. Trubin, A. Morozov, A. E. Zubanova, V. Ozheredov, V. Korepanova","doi":"10.37791/2687-0649-2022-17-4-47-56","DOIUrl":"https://doi.org/10.37791/2687-0649-2022-17-4-47-56","url":null,"abstract":"In the field of machine learning, there is no single methodology for data preprocessing, since all stages of this process are unique for a specific task. However, a specific data type is used in each direction. The research hypothesis assumes that it is possible to clearly structure the sequences and phases of data preparation for text recognition tasks. The article discusses the basic principles of data preprocessing and the allocation of successive stages as a specific technique for the task of recognizing ABC characters. ETL set images were selected as the source data. Preprocessing included the stages of working with images, at each of which changes were made to the source data. The first step was cropping, which allowed to get rid of unnecessary information in the image. Next, the approach of converting the image to the original aspect ratio was considered and the method of converting from shades of gray to black and white format was determined. At the next stage, the character lines were artificially expanded for better recognition of printed alphabets. At the last stage of data preprocessing, augmentation was performed, which made it possible to better recognize ABC characters regardless of their position in space. As a result, the general structure of the data preprocessing methodology for text recognition tasks was built.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":"36 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86107974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The use of artificial intelligence technologies for scientific and technological forecasting
Pub Date: 2022-08-31 | DOI: 10.37791/2687-0649-2022-17-4-57-74
S. Golubev, Alexander L. Afanasiev, Alexander V. Kuritsyn
Currently, artificial intelligence is widely used in the formation of social, economic and environmental forecasts. Its creation relies heavily on machine learning, deep learning, pattern discovery in large information arrays (Big Data), natural language processing and generation, and related technologies. At the same time, the use of artificial intelligence in scientific and technological forecasting has not been sufficiently worked out. The purpose of the study was to find effective approaches to using artificial intelligence technologies in the formation of scientific and technological forecasts. The objective of the study was to identify artificial intelligence technologies that can be used at various stages of the life cycle of scientific and technological forecasting, and to specify particular ways of using them to predict the level of development of science, engineering and technology relative to the world level; this confirms the relevance of the study. The main research method is the analysis of domestic and foreign publications and best practices in using artificial intelligence technologies for scientific and technological forecasting, together with the results of the authors' own research in this field, adapted to improve forecasting in the context of the digital transformation of the economy and enterprises. The authors examined the structure of the functions performed by artificial intelligence technologies and identified priority areas for their use at various stages of scientific and technological forecasting. The expediency and features of using semantic analysis and cognitive technologies to predict the readiness level of equipment and technologies relative to the world level under various scenario conditions are shown, which provides the greatest effectiveness of the adopted solution. Information and analytical support for the use of artificial intelligence in scientific and technological forecasting, based on decision-support information technologies, is also considered. The novelty of the presented results lies in the fact that, for the first time, the authors describe the possibilities of using the most effective artificial intelligence technologies at various stages of the forecasting life cycle from the standpoint of a systematic and integrated approach.
{"title":"The use of artificial intelligence technologies for scientific and technological forecasting","authors":"S. Golubev, Alexander L. Afanasiev, Alexander V. Kuritsyn","doi":"10.37791/2687-0649-2022-17-4-57-74","DOIUrl":"https://doi.org/10.37791/2687-0649-2022-17-4-57-74","url":null,"abstract":"Currently, artificial intelligence is widely used in the formation of social, economic and environmental forecasts. When creating artificial intelligence, machine learning technologies, deep learning technology and searching for patterns in information arrays (Big Data), artificial language processing and generation technologies, etc. are widely used. At the same time, the issue of using artificial intelligence in scientific and technological forecasting has not been worked out enough. The purpose of the study was to find effective approaches to the use of artificial intelligence technologies in the formation of scientific and technological forecasts. The objective of the study was to identify artificial intelligence technologies that can be used at various stages of the life cycle of scientific and technological forecasting and to specify individual ways of using them to solve problems of predicting the level of development of science, engineering and technology compared to the world. This confirms the relevance of the study. The main research method is the analysis of domestic and foreign publications and best practices for using artificial intelligence technologies in scientific and technological forecasting, as well as the results of research work performed by the authors in the field of scientific and technological forecasting and adapting them to improve the formation of forecasts in the context of digital transformation of the economy and enterprises The authors considered the structure of artificial functions performed by technologies and identified priority areas for the use of artificial intelligence at various stages of scientific and technological forecasting. The expediency and features of the use of semantic analysis and cognitive technologies in predicting the level of readiness of equipment and technologies in comparison with the world under various scenario conditions are shown, which provides the greatest efficiency of the adopted solution. The issues of information and analytical support for the use of artificial intelligence in scientific and technological forecasting based on information technologies for decision support are considered. The novelty of the presented results lies in the fact that, for the first time, the authors describe the possibilities of using the most effective artificial intelligence technologies at various stages of the life cycle for the formation of scientific and technological forecasts from the standpoint of a systematic and integrated approach.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":"5 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78676557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Categorization of interconnected objects of critical information infrastructure
Pub Date: 2022-05-31 | DOI: 10.37791/2687-0649-2022-17-3-105-116
D. M. Malinichev, K. K. Kuchmezov, V. Mochalov, O. V. Ratanova, Anton Andreev, S. A. Pokazanieva
The problem of building an information infrastructure resistant to computer attacks is relevant for organizing the work of any enterprise, so the ability to assess an existing or planned information infrastructure is very important. In this regard, the article deals with the problem of categorizing objects of critical information infrastructure in the context of the need to assess their interrelations. The current legislative acts that serve as the information base for identifying objects of critical information infrastructure and determining their purpose, structure and composition are considered, and the criteria for the significance of objects are defined. The article also defines the links between critical information infrastructure objects, their resistance to computer attacks, and the possible damage due to disruption of their functioning or of a critical process. The article describes the criteria subject to assessment and a methodology for assessing both the resistance of critical information infrastructure objects to computer attacks and the possible damage due to disruption of the functioning or performance of critical processes by such objects. An augmented solution is proposed for assessing the stability of the functioning of critical information infrastructure objects under various options for their interconnection. The possibility of assessing the cumulative damage due to disruption of the functioning of interconnected objects of critical information infrastructure is considered.
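Cumulative damage across interconnected objects can be viewed as propagation over a dependency graph: disabling one object also disrupts the objects that depend on it. The article's actual methodology and criteria are not reproduced here; the sketch below is only a hypothetical illustration of that graph-propagation idea, with the object names, dependencies and damage values invented for the example.

```python
from collections import deque

# Hypothetical dependency graph: object -> objects that depend on it.
DEPENDENTS = {
    "power_scada": ["billing", "dispatch"],
    "dispatch":    ["field_units"],
    "billing":     [],
    "field_units": [],
    "office_lan":  ["billing"],
}

# Hypothetical standalone damage (arbitrary units) if each object fails.
DAMAGE = {"power_scada": 50, "dispatch": 30, "billing": 10,
          "field_units": 25, "office_lan": 5}

def cumulative_damage(failed_object):
    """Sum damage over every object reachable from the initially failed one."""
    affected, queue = {failed_object}, deque([failed_object])
    while queue:
        current = queue.popleft()
        for dependent in DEPENDENTS.get(current, []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return sum(DAMAGE[o] for o in affected), affected

total, objects = cumulative_damage("power_scada")
print(f"attack on power_scada affects {sorted(objects)}, cumulative damage = {total}")
```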
{"title":"Categorization of interconnected objects of critical information infrastructure","authors":"D. M. Malinichev, K. K. Kuchmezov, V. Mochalov, O. V. Ratanova, Anton Andreev, S. A. Pokazanieva","doi":"10.37791/2687-0649-2022-17-3-105-116","DOIUrl":"https://doi.org/10.37791/2687-0649-2022-17-3-105-116","url":null,"abstract":"The problem of building an information infrastructure resistant to computer attacks is relevant for organizing the work of any enterprise. Therefore, the ability to assess the existing or developing information infrastructure is very important. In this regard, the article deals with the problem of categorizing objects of critical information infrastructure in the context of the need to assess their relationship. The current legislative acts, which are the information base for determining the objects of critical information infrastructure and determining their purpose, structure and composition, are considered, as well as the criteria for the significance of objects are determined. The article also defines the links between critical information infrastructure objects, their resistance to computer attacks, as well as possible damage due to disruption of their functioning or the performance of a critical process. The article provides a description of the criteria that are subject to assessment and a methodology for assessing the stability of critical information infrastructure objects to computer attacks and assessing possible damage due to disruption of the functioning or performance of critical processes by objects of critical information infrastructure. An augmented solution is proposed for assessing the stability of the functioning of critical information infrastructure objects with various options for their connection. The possibility of assessing the cumulative damage due to disruption of the functioning of interconnected objects of critical information infrastructure is considered.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":"20 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85152204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Text sentiment analysis in banking
Pub Date: 2022-05-31 | DOI: 10.37791/2687-0649-2022-17-3-5-15
S.P. Stroev, A. V. Zakharov, Zhanna V. Meksheneva, Valentin V. Shokolov, A. M. Nechaev, N. N. Lyublinskaya
The paper presents the authors' approach to sentiment analysis of online Russian-language messages about the activities of banks. The study data are customer reviews about banks in general and about their products, services and quality of service, posted on the Banki.ru portal. The problem of text sentiment analysis is considered as a binary classification task over a set of positive and negative reviews. A vector space model with a tf-idf weighting scheme was used to represent the collected and preprocessed texts. The following algorithms, with grid search over their parameters, were used for the binary classification task: naive Bayes classifier, support vector machine, logistic regression, random forest and gradient boosting. Standard statistical metrics, such as precision, recall, and F-measure, were used to evaluate classification quality. On these metrics, the best results were obtained by the classification model built with the support vector machine. Topic modeling of the texts was also carried out using latent Dirichlet allocation to determine the most typical topics of customer messages; the most popular topics turned out to be "cards" and "quality of service". The obtained results can be used by banks to automate reputation monitoring in the media and to route client requests. The Python programming language was actively used, namely its libraries for web scraping, machine learning, and natural language processing.
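The described pipeline (tf-idf vectorization, several classifiers with grid search, and LDA topic modeling) corresponds closely to standard scikit-learn components. The Banki.ru review corpus is not available here, so the sketch below shows only the general shape of such a pipeline on placeholder data; the example texts, labels and parameter grid are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Placeholder reviews (1 = positive, 0 = negative); the real data are Banki.ru reviews.
texts = ["Excellent service, the card was issued quickly",
         "Terrible support, the app keeps failing",
         "Convenient mobile bank, friendly staff",
         "Hidden fees and long queues, very disappointed"] * 25
labels = [1, 0, 1, 0] * 25

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, random_state=42, stratify=labels)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),  # tf-idf vector space model
    ("clf", LinearSVC()),                                      # SVM, the best model in the paper
])

# Grid search over a small illustrative parameter grid.
grid = GridSearchCV(pipeline, {"clf__C": [0.1, 1.0, 10.0]}, cv=3)
grid.fit(X_train, y_train)

print(classification_report(y_test, grid.predict(X_test)))
```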
{"title":"Text sentiment analysis in banking","authors":"S.P. Stroev, A. V. Zakharov, Zhanna V. Meksheneva, Valentin V. Shokolov, A. M. Nechaev, N. N. Lyublinskaya","doi":"10.37791/2687-0649-2022-17-3-5-15","DOIUrl":"https://doi.org/10.37791/2687-0649-2022-17-3-5-15","url":null,"abstract":"The paper presents the author's approach to solving the problem of sentiment analysis of online Russian-language messages about the activities of banks. The study data are customer reviews about banks in general and their products, services and quality of service posted on the Banki.ru portal. In this paper, the problem of text sentiment analysis is considered as a binary classification task based on a set of positive and negative reviews. A vector model with a tf-idf weighting scheme was used to represent the collected and preprocessed texts. The following algorithms with the selection of optimal parameters on the grid were used for binary classification task: naive Bayesian classifier, support vector machine, logistic regression, random forest and gradient boosting. Standard statistical metrics, such as accuracy, completeness, and F-measure, were used to evaluate the quality of solving the classification problem. For the indicated metrics, the best results were obtained on the classification model developed with the use of Support Vector Machine. Thematic text modeling was also carried out using the Dirichlet latent placement method to define the most typical topics of customer messages. As a result, it was concluded that the most popular message topics are \"cards\" and \"quality of service\". The obtained results can be used in the activities of banks to automate its reputation monitoring in the media and when routing client requests to solve various problems. When solving problems, the features of the Python programming language were actively used, namely, libraries for web scraping, machine learning, and natural language processing.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":"2 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88226210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning for detection of aortic root landmarks
Pub Date: 2022-05-31 | DOI: 10.37791/2687-0649-2022-17-3-73-83
K. Klyshnikov, E. Ovcharenko, V. Danilov, V. Ganyukov, L. Barbarash
A significant increase in the number of transcatheter aortic valve replacements motivates the development of auxiliary systems that provide intra- or preoperative assistance for such interventions. The main concept of such systems is computerized automatic recognition of the anatomical landmarks that are key to the procedure – in the case of transcatheter prosthetics, elements of the aortic root and of the delivery system. This work aims to demonstrate the potential of machine learning methods, specifically the modern ResNet V2 convolutional neural network architecture, for intraoperative real-time tracking of the main anatomical landmarks during transcatheter aortic valve replacement. The chosen neural network architecture was trained on clinical imaging data from 5 patients who received transcatheter aortic valve replacement using commercial CoreValve systems (Medtronic Inc., USA). The intraoperative aortograms obtained during these interventions, with annotations of the main anatomical landmarks – elements of the fibrous ring (annulus) of the aortic valve, the sinotubular junction, and elements of the delivery system – served as the data for the selected neural network. The total number of images was 2000, randomly split into two subsets: 1400 images for training and 600 for validation. It is shown that the chosen network architecture is capable of detection with an accuracy of 95-96% in terms of the classification and localization metrics, but it largely does not meet the performance (processing speed) requirements: the processing time for one aortography frame was 0.097 sec. The results obtained define the further direction of development of automatic anatomical landmark recognition in transcatheter aortic valve replacement from the standpoint of creating an assisting system: reducing the per-frame analysis time using optimization methods described in the literature.
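The abstract reports a ResNet V2 based detector evaluated on both classification and localization metrics. The authors' training setup is not given, so the sketch below is only a hypothetical Keras outline of a two-headed model on a ResNet50V2 backbone; the input size, head sizes, class list and losses are assumptions, not the article's configuration.

```python
import tensorflow as tf

NUM_CLASSES = 4  # assumed: annulus element, sinotubular junction, delivery system, background

def build_landmark_detector(input_shape=(224, 224, 3)):
    """ResNet50V2 backbone with a classification head and a box-regression head."""
    backbone = tf.keras.applications.ResNet50V2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    x = tf.keras.layers.Dense(256, activation="relu")(x)

    class_out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax", name="cls")(x)
    bbox_out = tf.keras.layers.Dense(4, activation="sigmoid", name="bbox")(x)  # normalized x, y, w, h

    model = tf.keras.Model(backbone.input, {"cls": class_out, "bbox": bbox_out})
    model.compile(
        optimizer="adam",
        loss={"cls": "categorical_crossentropy", "bbox": "mse"},
        metrics={"cls": "accuracy"})
    return model

model = build_landmark_detector()
model.summary()
# Training would follow as, e.g.:
# model.fit(train_images, {"cls": train_labels, "bbox": train_boxes},
#           validation_data=(val_images, {"cls": val_labels, "bbox": val_boxes}))
```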
{"title":"Machine learning for detection of aortic root landmarks","authors":"K. Klyshnikov, E. Ovcharenko, V. Danilov, V. Ganyukov, L. Barbarash","doi":"10.37791/2687-0649-2022-17-3-73-83","DOIUrl":"https://doi.org/10.37791/2687-0649-2022-17-3-73-83","url":null,"abstract":"A significant increase in the number of transcatheter aortic valve replacements entails the development of auxiliary systems that solve the problem of intra- or preoperative assistance to such interventions. The main concept of such systems is the concept of computerized automatic anatomical recognition of the main landmarks that are key to the procedure. In the case of transcatheter prosthetics – elements of the aortic root and delivery system. This work is aimed at demonstrating the potential of using machine learning methods, the modern architecture of the ResNet V2 convolutional neural network, for the task of intraoperative real-time tracking of the main anatomical landmarks during transcatheter aortic valve replacement. The basis for training the chosen architecture of the neural network was the clinical graphical data of 5 patients who received transcatheter aortic valve replacement using commercial CoreValve systems (Medtronic Inc., USA). The intraoperative aortographs obtained during such an intervention with visualization of the main anatomical landmarks: elements of the fibrous ring of the aortic valve, sinotubular articulation and elements of the delivery system, became the output data for the work of the selected neural network. The total number of images was 2000, which were randomly distributed into two subsamples: 1400 images for training; 600 – for validation. It is shown that the used architecture of the neural network is capable of performing detection with an accuracy of 95-96% in terms of the metrics of the classification and localization components, however, to a large extent does not meet the performance requirements (processing speed): the processing time for one aortography frame was 0.097 sec. The results obtained determine the further direction of development of automatic anatomical recognition of the main landmarks in transcatheter aortic valve replacement from the standpoint of creating an assisting system – reducing the time of analysis of each frame due to the optimization methods described in the literature.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":"9 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74393807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}