DEVELOPMENT OF METHOD AND SOFTWARE FOR COMPRESSION AND ENCRYPTION OF INFORMATION
Pub Date: 2022-01-01. DOI: 10.34229/1028-0979-2022-1-7
D. Ratov
The subject area of lossless and lossy information compression is studied, and minimum-redundancy data compression algorithms (Shannon-Fano coding, Huffman coding) as well as dictionary-based compression (Lempel-Ziv coding) are considered. Drawing on the theoretical foundations of data compression, various compression methods were examined, and the most suitable methods for archiving with encryption and for storing different kinds of data were identified. Data archiving is used here for the safe and rational placement of information on external media and for its protection against deliberate or accidental destruction or loss. A software package for an archiver with code-based protection of information was developed in the Embarcadero RAD Studio XE8 integrated development environment. The archiver operates by creating and processing data streams; its core is the function that compresses and decompresses files using the Lempel-Ziv method. Polyalphabetic substitution (the Vigenère cipher) was used as the method and means of protecting information in the archive. The results of the work, in particular the developed software, can be applied in practice to the archival storage of protected information, and the archiving-and-encryption mechanism can be used in data transmission systems to reduce network traffic and ensure data security. The resulting encryption and archiving software was used in a module of the software package «Diplomas SNU v.2.6.1», developed at the Volodymyr Dal East Ukrainian National University. This package is designed to maintain a unified register of diplomas at the university and to automate the creation of higher-education diploma files in the multifunctional graphics editor Adobe Photoshop. The controller exports all data needed for the analysis and generation of diplomas from the parameters of the corresponding XML files, which are downloaded from the unified state education database as compressed zip archives. The developed module unzips these archives and retrieves the XML files with the parameters required for the further operation of «Diplomas SNU v.2.6.1».
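As an illustration of the archiver's two ingredients (dictionary-based compression followed by polyalphabetic substitution), here is a minimal byte-wise Vigenère-style sketch in Python; the function name, the modulo-256 variant and the use of zlib as a stand-in Lempel-Ziv compressor are assumptions for illustration, not the authors' Delphi implementation.

```python
# Illustrative sketch: byte-wise Vigenere-style polyalphabetic substitution,
# applied to a compressed stream before it is written to the archive.
# Names and the modulo-256 variant are assumptions for illustration only.

def vigenere_bytes(data: bytes, key: bytes, decrypt: bool = False) -> bytes:
    """Shift every data byte by the corresponding (cycled) key byte mod 256."""
    sign = -1 if decrypt else 1
    return bytes((b + sign * key[i % len(key)]) % 256 for i, b in enumerate(data))

if __name__ == "__main__":
    import zlib  # stand-in for the archiver's Lempel-Ziv compressor

    payload = b"example diploma parameters in XML" * 10
    key = b"secret-key"

    protected = vigenere_bytes(zlib.compress(payload), key)           # compress, then encrypt
    restored = zlib.decompress(vigenere_bytes(protected, key, True))  # decrypt, then decompress
    assert restored == payload
```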
{"title":"DEVELOPMENT OF METHOD AND SOFTWARE FOR COMPRESSION AND ENCRYPTION OF INFORMATION","authors":"D. Ratov","doi":"10.34229/1028-0979-2022-1-7","DOIUrl":"https://doi.org/10.34229/1028-0979-2022-1-7","url":null,"abstract":"Researches of the subject area of lossless information compression and with data loss are carried out and data compression algorithms with minimal redundancy are considered: Shannon-Fano coding, Huffman coding and compression using a dictionary: Lempel-Ziv coding. In the course of the work, the theoretical foundations of data compression were used, studies of various methods of data compression were carried out, the best methods of archiving with encryption and storage of various kinds of data were identified. The method of archiving data in the work is used for the purpose of safe and rational placement of information on external media and its protection from deliberate or accidental destruction or loss. In the Embarcadero RAD Studio XE8 integrated development environment, a software package for an archiver with code protection of information has been developed. The archiverʼs mechanism of operation is based on the creation and processing of streaming data. The core of the archiver is the function of compressing and decompressing files using the Lempel-Ziv method. As a method and means of protecting information in the archive, poly-alphabetic substitution (Viziner cipher) was used. The results of the work, in particular, the developed software can be practically used for archival storage of protected information; the mechanism of data archiving and encryption can be used in information transmission systems in order to reduce network traffic and ensure data security. The resulting encryption and archiving software was used in the module of the software package «Diplomas SNU v.2.6.1», which was developed at the Volodymyr Dal East Ukrainian National University. This complex is designed to create a unified register of diplomas at the university, automate the creation of files-diplomas of higher education in the multifunctional graphics editor Adobe Photoshop. The controller exports all data for analysis and formation of diplomas from the parameters of the corresponding XML files downloaded from the unified state education database in compressed zip archives. The developed module performs the process of unzipping and receiving XML-files with parameters for the further work of the complex «Diplomas SNU v.2.6.1».","PeriodicalId":54874,"journal":{"name":"Journal of Automation and Information Sciences","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47852599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NUMERICAL-ANALYTIC SOLUTION OF ONE MODELING PROBLEM OF FRACTIONAL-DIFFERENTIAL DYNAMICS OF COMPUTER VIRUSES
Pub Date: 2022-01-01. DOI: 10.34229/1028-0979-2022-1-6
V. Bogaenko, Vladimir Bulavatsky
The paper considers the problem of modeling the spread of computer viruses using a model based on the mathematical theory of biological epidemics. The urgency of the problem stems from the need to build effective anti-virus protection systems for computer networks on the basis of mathematical modeling of malware propagation. We consider the SIES model (Gan C., Yang X., Zhu Q.), which describes the spread dynamics of computer viruses while separating the influence of computers that are accessible on the Internet from those that are not. To take non-local effects, in particular memory effects, into account, a modification of this model based on the theory of fractional-order integro-differentiation is proposed. A technique for obtaining a numerical-analytical solution of the modeling problem on the basis of the fractional-differential counterpart of the SIES model is presented. Closed-form solutions are obtained for the numbers of vulnerable and external computers, and a finite-difference scheme of the fractional Adams method is constructed for the problem of determining the number of infected computers. Computational experiments based on the developed technique show that the system evolves to the steady state in a subdiffusive manner. At the same time, the number of external computers exhibits fast short-term growth at the initial stage of the process, followed by a smooth and slow decrease towards the steady state. For medium and large values of the time variable, the number of infected computers approaches the steady state in an ultra-slow mode. Thus, the proposed technique makes it possible to study the families of dynamic responses arising in the spread of computer viruses, including fast transient processes and the ultra-slow evolution of systems with memory.
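For orientation, a minimal sketch of the fractional Adams predictor-corrector (the Diethelm-Ford-Freed scheme) for a generic scalar Caputo equation D^alpha y = f(t, y); this illustrates the class of scheme named in the abstract and is not the authors' discretization of the SIES system itself.

```python
# Sketch of the fractional Adams (Diethelm-Ford-Freed) predictor-corrector for a
# scalar Caputo equation  D^alpha y(t) = f(t, y(t)),  0 < alpha <= 1,  y(0) = y0.
# Illustrative only; not the paper's discretization of the SIES model.
import math

def fractional_adams(f, y0, alpha, T, N):
    h = T / N
    y = [y0]
    fvals = [f(0.0, y0)]
    g1 = math.gamma(alpha + 1)
    g2 = math.gamma(alpha + 2)
    for n in range(N):
        t_next = (n + 1) * h
        # Predictor: fractional rectangle rule.
        b = [(n + 1 - j) ** alpha - (n - j) ** alpha for j in range(n + 1)]
        y_pred = y0 + (h ** alpha / g1) * sum(bj * fj for bj, fj in zip(b, fvals))
        # Corrector: fractional trapezoidal rule.
        a = [n ** (alpha + 1) - (n - alpha) * (n + 1) ** alpha]  # weight for j = 0
        a += [(n - j + 2) ** (alpha + 1) + (n - j) ** (alpha + 1)
              - 2 * (n - j + 1) ** (alpha + 1) for j in range(1, n + 1)]
        y_next = y0 + (h ** alpha / g2) * (
            f(t_next, y_pred) + sum(aj * fj for aj, fj in zip(a, fvals)))
        y.append(y_next)
        fvals.append(f(t_next, y_next))
    return y

# Example: fractional relaxation D^alpha y = -y, whose exact solution is a Mittag-Leffler function.
trajectory = fractional_adams(lambda t, y: -y, y0=1.0, alpha=0.8, T=5.0, N=500)
```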
{"title":"NUMERICAL-ANALYTIC SOLUTION OF ONE MODELING PROBLEM OF FRACTIONAL-DIFFERENTIAL DYNAMICS OF COMPUTER VIRUSES","authors":"V. Bogaenko, Vladimir Bulavatsky","doi":"10.34229/1028-0979-2022-1-6","DOIUrl":"https://doi.org/10.34229/1028-0979-2022-1-6","url":null,"abstract":"The paper considers the problem of modeling the dynamics of computer viruses spreading using a model based on the mathematical theory of biological epidemics. The urgency of the considered problem arises from the need to build effective anti-virus protection systems for computer networks based on the results of mathematical modeling of the spread of malicious software. We consider the SIES-model (Gan C., Yang X., Zhu Q.), that studies spread dynamics of computer viruses separating the influence of the action of computers accessible and unavailable on the Internet. In order to take into account non-local effects in this model, in particular memory effects, its modification on the ideas of the theory of fractional-order integro-differentiation is proposed. The technique of obtaining a numerical-analytical solution of the problem of modeling of computer viruses spread dynamics on the base of the fractional-differential counterpart of the SIES-model is presented. Closed forms solutions of the problems for the number of vulnerable and external computers are obtained, and a finite-difference scheme of the fractional Adams method for the problem of determining the number of infected computers is constructed. The results of computational experiments based on the developed technique of numerical-analytical solution show that there is a subdiffusion evolution of the system to the steady state. At the same time, for the number of external computers, a fast short-term growth is observed at the initial stages of process development with subsequent smooth and slow decrease towards the steady state. For medium and large values of the time variable, the evolution of the number of infected computers to the steady state occurs in an ultra-slow mode. Thus, the proposed technique makes it possible to study the families of dynamic reactions in the process of computer viruses spreading, including fast transient processes and ultra-slow evolution of systems with memory.","PeriodicalId":54874,"journal":{"name":"Journal of Automation and Information Sciences","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47024094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GENERALIZED APPROACH TO BUILDING COMPUTER’S TOOLS OF PREVENTIVE MEDICINE FOR HOME USING
Pub Date: 2022-01-01. DOI: 10.34229/1028-0979-2022-1-12
L. Fainzilberg
For the early detection and timely correction of imbalances in the body that can lead to the development of various diseases, personalized devices are needed with which one can monitor the current state of the body at home. The purpose of the article is to develop a universal approach to the construction of such tools and to demonstrate its effectiveness on examples of urgent problems. A distinctive feature of the proposed approach is that the user forms, at home, a training sample of observations of his or her physiological indicators, from which two integral characteristics are computed automatically: the reference result, which is closest to all other observations, and a value characterizing the average deviation of the results. Personalized diagnostic rules are proposed that increase the reliability of decisions about the user's current functional state and provide an assessment of the risk of developing a pathology. The proposed rules form the basis of original preventive-medicine tools for home use, including the intelligent PHASEGRAPH® electrocardiograph for diagnosing myocardial ischemia at early stages, the AI-RHYTHMOGRAPH software application for determining heart rate variability parameters, the AI-ARTERIOGRAPH for an integral assessment of the properties of blood vessels, an intelligent blood pressure monitor that measures long-term blood pressure variability between doctor visits, and an intelligent stethoscope for detecting respiratory disorders at home. Further development of the proposed approach will make it possible to create personalized means for assessing visual and hearing acuity at home, and for monitoring the vestibular apparatus, essential tremor and other functions.
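A minimal sketch of one plausible reading of the two integral characteristics (a reference observation closest to all others, and the average deviation from it); the Euclidean metric, the threshold factor and all names are assumptions for illustration, not the exact procedure of the paper.

```python
# Illustrative sketch of the two integral characteristics described above:
# a reference observation that is closest to all others, and the average
# deviation of observations from it. The Euclidean metric and the threshold
# rule are assumptions, not the authors' exact procedure.
import math

def reference_and_deviation(samples):
    """samples: list of equal-length vectors of physiological indicators."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Reference = the observation with the smallest total distance to all others.
    reference = min(samples, key=lambda s: sum(dist(s, t) for t in samples))
    avg_dev = sum(dist(reference, t) for t in samples) / len(samples)
    return reference, avg_dev

def looks_normal(new_obs, reference, avg_dev, k=2.0):
    """Hypothetical personalized rule: flag observations far from the reference."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(new_obs, reference)))
    return d <= k * avg_dev
```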
{"title":"GENERALIZED APPROACH TO BUILDING COMPUTER’S TOOLS OF PREVENTIVE MEDICINE FOR HOME USING","authors":"L. Fainzilberg","doi":"10.34229/1028-0979-2022-1-12","DOIUrl":"https://doi.org/10.34229/1028-0979-2022-1-12","url":null,"abstract":"For early detection and timely correction of imbalances in the body that can lead to the development of various diseases, personalized devices are needed with which one can control the current state of the body at home. The purpose of the article is to develop a universal approach to the construction of such tools and, using examples of solving urgent problems, to demonstrate its effectiveness. A distinctive feature of the proposed approach is that the user at home has the ability to form a training sample of observations of his physiological indicators, according to which two integral characteristics are automatically calculated: the reference result, which is closest to all other observations, and the value, characterizing the average deviation of the results. Personalized diagnostic rules are proposed that ensure an increase in the reliability of decisions about the current functional state of the user and an assessment of the risk of a possible development of pathology. The proposed rules form the basis of original preventive medicine for home use, including the intelligent PHASEGRAPH® electrocardiograph for diagnosing myocardial ischemia at early stages, AI-RHYTHMOGRAPH software applications for determining heart rate variability parameters and AI-ARTERIOGRAPH for integral assessment of properties blood vessels, an intelligent blood pressure monitor that measures the long-term variability in blood pressure between doctor visits, and an intelligent stethoscope for detecting respiratory distress at home. Further development of the proposed approach will make it possible to create personalized means of assessing visual acuity and hearing acuity at home, control of the vestibular apparatus, essential tremor and other means.","PeriodicalId":54874,"journal":{"name":"Journal of Automation and Information Sciences","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42923893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RESOURCE DISTRIBUTION PROBLEM
Pub Date: 2022-01-01. DOI: 10.34229/1028-0979-2022-1-1
A. Voronin, A. Savchenko
In various subject areas it is relevant to distribute the resources of a controlled system among its individual elements (objects) in a way that ensures the most efficient functioning of the system under the given circumstances. The problem of distributing a given global resource is considered under lower bounds imposed on the partial resources. It is shown that the problem consists in constructing an adequate criterion function for optimizing the distribution of resources under such constraints. The objective function is a scalar convolution of the vector of partial resources; it must penalize partial resources for approaching their limits dangerously closely and be differentiable in its arguments. In the problem under consideration, partial resources have a dual nature. On the one hand, they can be treated as independent variables, the arguments over which the objective function is optimized. On the other hand, it is natural for each object to strive to maximize its own partial resource, moving as far as possible from the dangerous bound in order to increase the efficiency of its functioning. From this point of view, the resources can be treated as particular criteria of the quality of functioning of the corresponding objects. These criteria are to be maximized; they are bounded from below, non-negative and contradictory (an increase in one resource is possible only at the expense of a decrease in others). To solve the problem, a multicriteria optimization approach based on the nonlinear trade-off scheme is applied. The proposed approach is recommended for the compromise-optimal allocation of resources in a wide range of practical problems. An illustrative example is given.
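One plausible form of such a scalar convolution, consistent with the stated requirements (it penalizes a partial resource for approaching its lower bound and is differentiable in the admissible region); the exact convolution of the nonlinear trade-off scheme used by the authors may differ.

```latex
% One plausible scalar convolution consistent with the stated requirements
% (not necessarily the exact form used by the authors):
\[
  J(r_1,\dots,r_n) \;=\; \sum_{k=1}^{n} \frac{\alpha_k}{\,r_k - r_k^{\min}\,}
  \;\longrightarrow\; \min,
  \qquad
  \text{subject to } \sum_{k=1}^{n} r_k = R, \quad r_k > r_k^{\min},
\]
% where $R$ is the global resource, $r_k^{\min}$ are the lower bounds and
% $\alpha_k > 0$ are importance weights; the penalty grows without bound as any
% $r_k$ approaches its limit, and $J$ is differentiable in the admissible region.
```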
{"title":"RESOURCE DISTRIBUTION PROBLEM","authors":"A. Voronin, A. Savchenko","doi":"10.34229/1028-0979-2022-1-1","DOIUrl":"https://doi.org/10.34229/1028-0979-2022-1-1","url":null,"abstract":"In various subject areas, the problem of such a distribution of the resources of a controlled system between individual elements (objects) is relevant, which ensures the most efficient functioning of the system in given circumstances. The problem of distribution of the given global resource is considered at restrictions from below, applied on partial resources. It is shown, that the problem consists in construction of adequate criterion function for optimization of process of distribution of resources in conditions of their limitation. The objective function is a scalar convolution of the partial resource vector. Requirements for the objective function: it must penalize partial resources for dangerously approaching its limits and be differentiable in its arguments. In the problem under consideration, partial resources have a dual nature. On the one hand, they can be considered as independent variables, arguments for the optimization of the objective function. On the other hand, it is logical for each of the objects to strive to maximize its partial resource, to go as far as possible from a dangerous limitation in order to increase the efficiency of its functioning. From this point of view, resources can be considered as particular criteria for the quality of the functioning of the corresponding objects. These criteria are subject to maximization, they are limited from below, non-negative and contradictory (an increase in one resource is possible only at the expense of a decrease in others). For the decision of a considered problem the approach of multicriteria optimization with use of the nonlinear trade-off scheme is undertaken. The proposed approach is recommended for a compromise-optimal allocation of resources in a wide range of practical problems. The illustrating example is given.","PeriodicalId":54874,"journal":{"name":"Journal of Automation and Information Sciences","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47167618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IMPROVING FACE RECOGNITION MODELS USING METRIC LEARNING, LEARNING RATE SCHEDULERS, AND AUGMENTATIONS
Pub Date: 2021-11-01. DOI: 10.34229/1028-0979-2021-6-9
Andrey Litvynchuk, L. Baranovska
Face recognition is one of the main tasks of computer vision; it is relevant because of its practical significance and attracts wide interest among researchers. Its many applications have led to a huge amount of research in this area, and although work in the field has been going on since the beginnings of computer vision, good results have been achieved only with the help of convolutional neural networks. In this work, a comparative analysis of pre-convolutional facial recognition methods was performed, and a metric learning approach, augmentations and learning rate schedulers were considered. A series of experiments and a comparative analysis of these techniques for improving convolutional neural networks were carried out, resulting in a universal procedure for training a face recognition model. SE-ResNet50 was used as the only neural network in the experiments. Metric learning is a method that makes it possible to achieve good accuracy in face recognition. Overfitting is a major problem of neural networks, in particular because they have very many parameters and usually not enough data to guarantee generalization of the model. Since additional data labeling can be time-consuming and expensive, augmentation is used instead: augmentations artificially enlarge the training dataset and, as expected, improved the results relative to the baseline in all experiments, with stronger and more aggressive augmentations giving better results in this work. As expected, the best learning rate scheduler was the cosine scheduler with warm-ups and restarts; it has few parameters and is therefore easy to use. Overall, by combining these approaches we obtained an accuracy of 93.5 %, which is 22 % better than the baseline experiment. In subsequent studies it is planned to improve not only the face recognition model but also face detection, since the accuracy of face recognition directly depends on the quality of face detection.
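A short PyTorch-flavoured sketch of the training ingredients discussed above (standard augmentations and a cosine learning-rate schedule with warm restarts); the placeholder backbone, the hyperparameters and the elided metric-learning head are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the training ingredients discussed above: standard augmentations and a
# cosine learning-rate schedule with warm restarts, in PyTorch. The backbone,
# hyperparameters and the elided metric-learning loss are illustrative assumptions.
from torch import nn, optim
from torchvision import transforms

train_augmentations = transforms.Compose([
    transforms.RandomResizedCrop(112, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.25),
])

embedding_net = nn.Sequential(          # placeholder for an SE-ResNet50 backbone
    nn.Flatten(), nn.Linear(3 * 112 * 112, 512),
)

optimizer = optim.SGD(embedding_net.parameters(), lr=0.1, momentum=0.9)
scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=10, T_mult=2, eta_min=1e-6)  # cosine decay with warm restarts

for epoch in range(30):
    # ... run one epoch of metric-learning training on augmented batches here ...
    scheduler.step()  # advance the restartable cosine learning-rate schedule
```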
{"title":"IMPROVING FACE RECOGNITION MODELS USING METRIC LEARNING, LEARNING RATE SCHEDULERS, AND AUGMENTATIONS","authors":"Andrey Litvynchuk, L. Baranovska","doi":"10.34229/1028-0979-2021-6-9","DOIUrl":"https://doi.org/10.34229/1028-0979-2021-6-9","url":null,"abstract":"Face recognition is one of the main tasks of computer vision, which is relevant due to its practical significance and great interest of wide range of scientists. It has many applications, which has led to a huge amount of research in this area. And although research in the field has been going on since the beginning of the computer vision, good results could be achieved only with the help of convolutional neural networks. In this work, a comparative analysis of facial recognition methods before convolutional neural networks was performed. A metric learning approach, augmentations and learning rate schedulers are considered. There were performed bunch of experiments and comparative analysis of the considered methods of improvement of convolutional neural networks. As a result a universal algorithm for training the face recognition model was obtained. In this work, we used SE-ResNet50 as the only neural network for experiments. Metric learning is a method by which it is possible to achieve good accuracy in face recognition. Overfitting is a big problem of neural networks, in particular because they have too many parameters and usually not enough data to guarantee the generalization of the model. Additional data labeling can be time-consuming and expensive, so there is such an approach as augmentation. Augmentations artificially increase the training dataset, so as expected, this method improved the results relative to the original experiment in all experiments. Different degrees and more aggressive forms of augmentation in this work led to better results. As expected, the best learning rate scheduler was cosine scheduler with warm-ups and restarts. This schedule has few parameters, so it is also easy to use. In general, using different approaches, we were able to obtain an accuracy of 93,5 %, which is 22 % better than the baseline experiment. In the following studies, it is planned to consider improving not only the model of facial recognition, but also detection. The accuracy of face detection directly depends on the quality of face recognition.","PeriodicalId":54874,"journal":{"name":"Journal of Automation and Information Sciences","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49008088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
STRATEGIC INTERACTION OF PROVIDERS FOR DIFFERENTIATED INTERNET SERVICES
Pub Date: 2021-11-01. DOI: 10.34229/1028-0979-2021-6-10
Alexey Gaivoronski, Vasily Gorbachuk, Maxim Dunaievskiy
As computing and Internet connectivity become general-purpose technologies and services aimed at broad global markets, questions arise about the effectiveness of such markets in terms of public welfare and of the participation of differentiated service providers and end users. Motorola's Iridium global communications project, completed in the 1990s, faced similar issues while being the first to reach the goal of technological connectivity. Since Internet services are characterized by high innovation, differentiation and dynamism, well-known models of differentiated products can be applied to them; however, the demand functions in such models are hyperbolic rather than linear, and the models are stochastic and include providers that compete in different ways. In the Internet ecosystem, the links between Internet service providers (ISPs), acting as telecommunications operators, and content service providers are important, especially providers of high-bandwidth video content. Because increasing bandwidth requires new investment in network capacity, both video content providers and ISPs need to be motivated to make it. To analyze the relationships between Internet service providers and content providers in the Internet ecosystem, computable models based on the construction of payoff functions for all participants in the ecosystem are suggested. The introduction of paid content browsing would motivate Internet service providers to invest in increasing the capacity of the global network, which exhibits a trend of exponential growth. At the same time, such browsing would violate the principles of net neutrality, which motivates new problems of minimizing violations of net neutrality while maximizing the social welfare of the Internet ecosystem. The models point to the importance of the efficiency of Internet service providers, the predictability of demand and the high price elasticity of innovative services.
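For orientation, a generic contrast between the linear demand of textbook differentiated-product models and a hyperbolic (constant-elasticity) demand of the kind mentioned above, together with a schematic provider payoff; the paper's actual payoff functions are not reproduced here.

```latex
% Generic illustration only: linear vs hyperbolic (constant-elasticity) demand for
% a differentiated service i; the paper's actual payoff functions are not reproduced here.
\[
  q_i^{\mathrm{lin}}(p_i) = a_i - b_i p_i,
  \qquad
  q_i^{\mathrm{hyp}}(p_i) = \frac{a_i}{p_i^{\varepsilon_i}}, \quad \varepsilon_i > 0,
\]
\[
  \pi_i(p_i) = (p_i - c_i)\, q_i(p_i) - F_i,
\]
% where $c_i$ is the marginal cost and $F_i$ the fixed (capacity) investment; the
% hyperbolic form has constant price elasticity $\varepsilon_i$.
```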
{"title":"STRATEGIC INTERACTION OF PROVIDERS FOR DIFFERENTIATED INTERNET SERVICES","authors":"Alexey Gaivoronski, Vasily Gorbachuk, Maxim Dunaievskiy","doi":"10.34229/1028-0979-2021-6-10","DOIUrl":"https://doi.org/10.34229/1028-0979-2021-6-10","url":null,"abstract":"As computing and Internet connections become general-purpose technologies and services aimed at broad global markets, questions arise about the effectiveness of such markets in terms of public welfare, the participation of differentiated service providers and end-users. Motorola’s Iridium Global Communications project was completed in the 1990s due to similar issues, reaching the goal of technological connectivity for the first time. As Internet services are characterized by high innovation, differentiation and dynamism, they can use well-known models of differentiated products. However, the demand functions in such models are hyperbolic rather than linear. In addition, such models are stochastic and include providers with different ways of competing. In the Internet ecosystem, the links between Internet service providers (ISPs) as telecommunications operators and content service providers are important, especially high-bandwidth video content providers. As increasing bandwidth requires new investments in network capacity, both video content providers and ISPs need to be motivated to do so. In order to analyze the relationships between Internet service providers and content providers in the Internet ecosystem, computable models, based on the construction of payoff functions for all the participants in the ecosystem, are suggested. The introduction of paid content browsing will motivate Internet service providers to invest in increasing the capacity of the global network, which has a trend of exponential growth. At the same time, such a browsing will violate the principles of net neutrality, which provides grounds for the development of new tasks to minimize the violations of net neutrality and maximize the social welfare of the Internet ecosystem. The models point to the importance of the efficiency of Internet service providers, the predictability of demand and the high price elasticity of innovative services.","PeriodicalId":54874,"journal":{"name":"Journal of Automation and Information Sciences","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48009371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ON OPTIMAL CONTROL OF A STOCHASTIC EQUATION WITH A FRACTIONAL WIENER PROCESS
Pub Date: 2021-11-01. DOI: 10.34229/1028-0979-2021-6-1
P. Knopov, T. Pepelyaeva, Sergey Shpiga
In recent years, a new direction of research has emerged in the theory of stochastic differential equations, namely stochastic differential equations with a fractional Wiener process. This class of processes makes it possible to adequately describe many real phenomena of a stochastic nature in financial mathematics, hydrology, biology and many other areas. Such phenomena are not always described by stochastic systems satisfying strong-mixing or weak-dependence conditions; instead they are described by systems with strong dependence, which is governed by the so-called Hurst parameter characterizing this dependence. In this article we consider the existence of an optimal control for a stochastic differential equation with a fractional Wiener process in which a diffusion coefficient is present, which yields more accurate simulation results. An existence theorem is proved for an optimal control of a process satisfying the corresponding stochastic differential equation. The main result is obtained using the Girsanov theorem for such processes and the existence theorem for a weak solution of stochastic equations with a fractional Wiener process.
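Standard background notation (not taken from the paper itself): the covariance of a fractional Wiener process with Hurst parameter H, and the schematic form of a controlled equation with a diffusion coefficient.

```latex
% Background notation (standard, not reproduced from the paper): a fractional
% Wiener process (fractional Brownian motion) B_H with Hurst parameter H in (0,1)
% is the centered Gaussian process with covariance
\[
  \mathbb{E}\bigl[B_H(t)\,B_H(s)\bigr]
  \;=\; \tfrac{1}{2}\bigl(t^{2H} + s^{2H} - |t-s|^{2H}\bigr),
\]
% so that H = 1/2 recovers the ordinary Wiener process, while H > 1/2 gives the
% strong (long-range) dependence mentioned above. A controlled equation of the
% type discussed can be written schematically as
\[
  dX_t \;=\; b(t, X_t, u_t)\,dt \;+\; \sigma(t, X_t)\,dB_H(t), \qquad X_0 = x_0,
\]
% where u is the control and \sigma the diffusion coefficient; the precise
% assumptions on b, \sigma and the admissible controls are those of the paper.
```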
{"title":"ON OPTIMAL CONTROL OF A STOCHASTIC EQUATION WITH A FRACTIONAL WIENER PROCESS","authors":"P. Knopov, T. Pepelyaeva, Sergey Shpiga","doi":"10.34229/1028-0979-2021-6-1","DOIUrl":"https://doi.org/10.34229/1028-0979-2021-6-1","url":null,"abstract":"In recent years, a new direction of research has emerged in the theory of stochastic differential equations, namely, stochastic differential equations with a fractional Wiener process. This class of processes makes it possible to describe adequately many real phenomena of a stochastic nature in financial mathematics, hydrology, biology, and many other areas. These phenomena are not always described by stochastic systems satisfying the conditions of strong mixing, or weak dependence, but are described by systems with a strong dependence, and this strong dependence is regulated by the so-called Hurst parameter, which is a characteristic of this dependence. In this article, we consider the problem of the existence of an optimal control for a stochastic differential equation with a fractional Wiener process, in which the diffusion coefficient is present, which gives more accurate simulation results. An existence theorem is proved for an optimal control of a process that satisfies the corresponding stochastic differential equation. The main result was obtained using the Girsanov theorem for such processes and the existence theorem for a weak solution for stochastic equations with a fractional Wiener process.","PeriodicalId":54874,"journal":{"name":"Journal of Automation and Information Sciences","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41772743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MODELS AND METHODS OF INFORMATION TECHNOLOGY OF ADVANCED NEONOTAL SCREENING
Pub Date: 2021-11-01. DOI: 10.34229/1028-0979-2021-6-13
Yekaterina Kovalоva, V. Lyfar
The paper considers the problems of the informational implementation of neonatal screening of newborns, aimed at improving the overall picture of the nation's health and preventing the development of hereditary diseases. The methodology for solving the problems of complete neonatal screening is based on the methods and mathematical apparatus of discrete mathematics, web technologies, data warehouses and data mining. An information model of the dynamic processes of neonatal screening is proposed. It is based on processing data represented as a tuple that links the sequential processes of obtaining the results of newborn blood tests, conducting genetic studies and identifying pathologies and deviations from an extended list (currently up to 44 indicators, with the goal of reaching more than 60). A block diagram of the information support of this technology within a decision support system for neonatal screening of hereditary metabolic diseases is presented. The algorithm for performing the sequential procedures of neonatal screening was studied on the basis of LLC «CDC «PHARMBIOTEST». The described algorithm has been tested for the continuity of information flows and the stability of the information model graph. As a result of the research, the sufficiency and completeness of the chronological indicators of information flow processing have been proved, and criteria for confirming the validity of the methods for obtaining a diagnosis have been developed.
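A purely illustrative sketch of how the screening "tuple" described above might be represented as a record linking the sequential stages; all field names and types are assumptions, not the authors' schema.

```python
# Illustrative sketch only: one way to represent the screening "tuple" described
# above as a record linking the sequential stages (blood test, genetic study,
# detected deviations). Field names and types are assumptions, not the authors' schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ScreeningRecord:
    newborn_id: str
    blood_sample_date: date
    blood_test_results: dict[str, float] = field(default_factory=dict)   # indicator -> value
    genetic_study_results: Optional[dict[str, str]] = None               # marker -> finding
    detected_deviations: list[str] = field(default_factory=list)         # from the extended list (up to 44 indicators)

record = ScreeningRecord("NB-0001", date(2021, 11, 1), {"TSH": 4.2, "PHE": 1.1})
record.detected_deviations.append("PHE above reference range")
```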
{"title":"MODELS AND METHODS OF INFORMATION TECHNOLOGY OF ADVANCED NEONOTAL SCREENING","authors":"Yekaterina Kovalоva, V. Lyfar","doi":"10.34229/1028-0979-2021-6-13","DOIUrl":"https://doi.org/10.34229/1028-0979-2021-6-13","url":null,"abstract":"The paper considers the problems of informational implementation of neonatal screening of newborns in order to improve the overall picture of the nation's health and prevent the development of hereditary diseases. The methodology for solving the problems of complete neonatal screening is based on the methods and mathematical apparatus of discrete mathematics, web technologies, data warehouses, and data mining methods. An information model of the dynamic processes of neonatal screening is proposed, based on the specific processing of data presented by a tuple, which contains coherent sequential processes for obtaining the results of tests for blood analysis of newborns, conducting genetic studies and determining pathologies and deviations from an expanded list (currently up to 44 indicators for the purpose of exiting for more than 60). The block diagram of information support of information technology in the decision support system for carrying out neonatal screening of hereditary metabolic diseases is presented. On the basis of LLC «CDC «PHARMBIOTEST», the research of the algorithm for performing sequential procedures of neonatal screening was carried out. The described algorithm of actions has been tested and fully tested for the continuity of information flows, the stability of the information model graph. As a result of the research, the sufficiency and completeness of the chronological indicators of the processing of information flows have been proved. The criteria for confirming the authenticity of methods for obtaining a diagnosis have been developed.","PeriodicalId":54874,"journal":{"name":"Journal of Automation and Information Sciences","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42826956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FACTORS AND LEVELS ON DESIGN OF EXPERIMENT, EFECTIVE CHOICE UNDER CONSTRAINS
Pub Date: 2021-11-01. DOI: 10.34229/1028-0979-2021-6-12
S. Smirnov
The problem of designing an experiment under resource constraints is investigated. For a complex system intended for experimental research, before the well-known advanced methods of factorial design can be used, a simplified mathematical model must first be created that gives an incomplete, abbreviated description of the system. In this simplification, of all the objectively existing independent parameters of the system only the most important ones are retained, a forced step due to the natural limits on the resources available for the experimental study. The same constraints limit the number of values assigned to each parameter (the number of factor levels). The article modifies an existing method for discretizing such a model with a rational choice of discretization parameters under the existing limitations; the original method relies on an iterative solution procedure whose convergence is extremely unreliable. The main ideas of the modified approach are as follows. (0) The number of levels of each factor is chosen in proportion to the importance of the corresponding parameter, and the task is reduced to finding a fixed point (as in the known method). (1) A probability partition (instead of a partition into intervals of equal length) is used for discretization and for selecting representative parameter values, which yields an exact, simple expression for the Shannon entropy. (2) The nonlinear mapping is reduced from a multi-parameter to a one-parameter representation (a proportionality coefficient serving as the parameterization index), which decomposes and simplifies the iterative process. (3) An initial value of the proportionality coefficient is found for a factor of average relevance and then used in the calculations for the other factors, followed by iterative refinement. The iterative process is guaranteed to converge, because considering small and large values of the scalar parameter allows the intermediate value theorem for continuous functions to be applied. Using the developed procedure, two problems of assigning the number of factor levels, for small and for large resource constraints, are then solved, and the corresponding computational complications and ways to overcome them are indicated.
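A minimal sketch of one plausible reading of ideas (0), (2) and (3): levels proportional to importance through a single coefficient, found by bisection (the intermediate-value argument) so that the full-factorial run count fits the budget; this is an illustration, not the author's exact algorithm.

```python
# Illustrative sketch of ideas (0), (2), (3) above: assign each factor a number of
# levels proportional to its importance via a single coefficient c, and find c by
# bisection (the intermediate value argument) so the full-factorial run count fits
# the available budget. A plausible reading, not the author's exact algorithm.
import math

def levels_for(c, importance):
    """At least 2 levels per factor, proportional to importance scaled by c."""
    return [max(2, round(c * w)) for w in importance]

def total_runs(levels):
    return math.prod(levels)  # full-factorial design size

def choose_levels(importance, budget, tol=1e-6):
    lo, hi = 0.0, float(budget)  # small c -> few runs, large c -> budget exceeded
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total_runs(levels_for(mid, importance)) <= budget:
            lo = mid
        else:
            hi = mid
    return levels_for(lo, importance)

# Example: three factors of decreasing importance, at most 200 experimental runs.
print(choose_levels(importance=[1.0, 0.6, 0.3], budget=200))
```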
{"title":"FACTORS AND LEVELS ON DESIGN OF EXPERIMENT, EFECTIVE CHOICE UNDER CONSTRAINS","authors":"S. Smirnov","doi":"10.34229/1028-0979-2021-6-12","DOIUrl":"https://doi.org/10.34229/1028-0979-2021-6-12","url":null,"abstract":"The problem of design of experiment with resource constraints is investigated. For a complex system intended for experimental research, before using the well known advanced methods of factorial design, you must first create a simplified mathematical model that represents an incomplete abbreviated description of the system. At the same time, on this simplification from all objectively existing independent parameters of the system remain only the most important parameters, which is a forced procedure due to the natural limitations of the resources available to perform the experimental study. The same constraints limit the number of values assigned to each of the parameters (factor levels number). The article is devoted to the modification of the existing method of discretization of such a model with a rational choice of discretization parameters in accordance with the existing limitations, but with an extremely unreliable in terms of convergence iterative solution procedure. The main ideas of the modified approach are as follows: 0) The choice of the number of levels of factors is proportional to the importance of the relevant parameters and the reduction to the problem of finding a fixed point (as in the known method). 1) Probability partition (instead of partition into equal length intervals) for discretization and selection of representative values of the parameter, which allows to find an exact simple expression for its Shannon entropy. 2) Transition from multi- to one-parameter (coefficient of proportionality as an indicator of parameterization) representation of nonlinear mapping, its decomposition and simplification of the iterative process. 3) Finding the initial value of the coefficient of proportionality for a factor with average relevance and calculations for other factors, followed by iterative refinement. The iterative process is guaranteed to coincide, because the consideration of small and large values of the scalar parameter allows us to use the theorem on the intermediate value of a continuous function. Then, with the help of the developed procedure, two tasks on the assignment of the number of factor levels for situations with small and large resource constraints are solved, the corresponding complications in the calculations and ways to overcome them are indicated.","PeriodicalId":54874,"journal":{"name":"Journal of Automation and Information Sciences","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45291128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ON THE UPPER AND LOWER RESOLVING FUNCTIONS IN GAME PROBLEMS OF DYNAMICS
Pub Date: 2021-11-01. DOI: 10.34229/1028-0979-2021-6-3
A. Chikrii, K. Chikrii
Quasi-linear conflict-controlled processes of general form are studied. The subject of investigation is the problem of bringing the trajectories to a given cylindrical set. The research is based on the method of upper and lower resolving functions. The main attention is paid to the case when Pontryagin's condition does not hold and, moreover, the bodily part of the terminal set is non-convex. A scheme of the method is proposed which, in the case of a non-convex bodily part, allows one to fix a point in it, namely the aiming point, and to realize the approach process. Sufficient conditions for solving the approach problem are obtained for different classes of strategies. In doing so, Hajek's stroboscopic strategies, which prescribe control in the sense of N.N. Krasovskii, are applied. The approach process proceeds in two stages, an active one and a passive one. During the active stage the upper resolving function of the second type is accumulated, and after the switching moment the lower resolving function of the second type is used. These functions allow a measurable control of the second player to be constructed on the basis of theorems on measurable choice, in particular the Filippov-Castaing theorem. The results obtained for generalized quasi-linear processes make it possible to encompass a wide range of functional-differential systems as well as systems with fractional and partial derivatives. Possibilities for developing the proposed technique are indicated.
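For orientation, the classical setting of the method of resolving functions in its standard form (the paper treats the harder case in which the condition below fails and the bodily part of the terminal set is non-convex).

```latex
% For orientation, the classical setting of the method of resolving functions,
% given here in its standard form; the paper treats the more general case in
% which Pontryagin's condition below does not hold. The quasi-linear process is
\[
  \dot z(t) = A z(t) + \varphi\bigl(u(t), v(t)\bigr), \qquad u(t) \in U,\; v(t) \in V,
\]
% with the cylindrical terminal set $M^{\ast} = M_0 + M$, where $M_0$ is a linear
% subspace and $M$ (the bodily part) lies in its orthogonal complement $L$;
% $\pi$ denotes the orthogonal projector onto $L$. Pontryagin's condition requires
\[
  W(t) \;=\; \bigcap_{v \in V} \pi\, e^{tA} \varphi(U, v) \;\neq\; \varnothing
  \qquad \text{for all } t \ge 0,
\]
% and the resolving functions measure, roughly, the rate at which the set
% $M - \xi(t, z)$ can be covered along the trajectory, with termination guaranteed
% once their integral reaches one.
```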
{"title":"ON THE UPPER AND LOWER RESOLVING FUNCTIONS IN GAME PROBLEMS OF DYNAMICS","authors":"A. Chikrii, K. Chikrii","doi":"10.34229/1028-0979-2021-6-3","DOIUrl":"https://doi.org/10.34229/1028-0979-2021-6-3","url":null,"abstract":"The quasi-linear conflict-controlled processes of general form are studied. The theme for investigation is the problem of the trajectories approaching a given cylindrical set. The research is based on the method of upper and lower resolving functions. The main attention is paid to the case when Pontryagin’s condition does not hold, moreover, the bodily part of the terminal set is non-convex. A scheme of the method is proposed, which allows, in the case of non-convexity of the body part, to fix some point in it, namely the aiming point, and to realize the process of approach. Sufficient conditions are obtained for solving the problem of approach for different classes of strategies. In so doing, the Hayek stroboscopic strategies that prescribe control by N.N. Krasovskii are applied. The process of approach goes on in two stages — active and passive. On the active stage the upper resolving function of second type is accumulated and after the moment of switching the lower resolving function of second type is used. These functions allow constructing a measurable control of second player on the basis of the theorems on measurable choice, in particular, the Filippov-Castaing theorem. The obtained results for generalized quasi-linear processes make it possible to encompass a wide range of functional-differential systems as well as the systems with fractional and partial derivatives. Possibilities for development of the offered technique are specified.","PeriodicalId":54874,"journal":{"name":"Journal of Automation and Information Sciences","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47777467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}