Pub Date: 2021-09-30 | DOI: 10.17212/2782-2001-2021-3-19-36
Alexandr Bulygin, A. Kashevnik
The article analyzes the methods of detecting driver fatigue described in the modern literature. There is a great variety of methods for assessing the functional state of a person. A functional state is an integral set of characteristics of those functions and qualities of a person that directly or indirectly determine the performance of any activity. The physical and mental state of a person, and the success of their work, training, and creative activity, depend on the functional state of the organism. The assessment of dynamic driver behavior has become an increasingly popular area of research in recent years. Dynamic assessment of driver behavior involves continuous monitoring, which makes it possible to determine functional states, in contrast to modern driver monitoring systems, which assess conditions such as drowsiness and impaired attention over a short (1-10 s) time interval. Such systems provide physiological rather than neurophysiological monitoring; it is the latter that makes it possible to track the functional state of fatigue. Therefore, it makes sense to monitor the driver's state of fatigue and to warn the driver in a timely manner to avoid collisions with other vehicles. The article studies and analyzes the ways of obtaining characteristics of a person from which their functional state of fatigue can be determined. Based on an analysis of the sources, the most common methods for determining the functional state of the driver were selected. The sources found were then classified according to the most common methods for obtaining significant characteristics of the driver's functional state. Finally, a comparative analysis was carried out, demonstrating the capabilities of modern systems of this class.
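As a rough illustration (not taken from the article) of the contrast between short-interval and continuous monitoring, a sliding-window average over a synthetic per-second eye-closure signal can be sketched as follows; the window lengths and the signal values are invented for the example:

```python
from collections import deque

def sliding_mean(samples, window):
    """Continuous monitoring sketch: mean of the last `window` samples at each step."""
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

# Synthetic per-second eye-closure fractions (1.0 = eyes fully closed).
signal = [0.1] * 10 + [0.8] * 5 + [0.1] * 10
short = sliding_mean(signal, 5)    # short-interval estimate, reacts to a brief episode
long_ = sliding_mean(signal, 25)   # whole-trip estimate, reflects the accumulated state
```

A short window flags the brief episode of closed eyes, while the long window summarizes the whole interval; real systems use far richer features than this toy average.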
Title: "Analysis of current research in the field of detecting driver fatigue in the vehicle cab" (Analysis and data processing systems)
Pub Date: 2021-09-30 | DOI: 10.17212/2782-2001-2021-3-115-128
Evgeniya E. Istratova, D. Dostovalov
An urgent task in the implementation of electronic document management systems (EDMS) is to expand their functionality through personalization and by taking into account the individual characteristics of the organization. The article deals with expanding the functionality of an EDMS by designing a subsystem for data mining. As part of the study, the following were examined: the principles of formalizing the processing of incoming correspondence and of organizational and administrative documents; methods of collecting and analyzing data on users' work with various types of documents through the use of artificial neural networks; and a comprehensive assessment of the improvement in the efficiency of the EDMS of an educational institution. Quantitative characteristics that directly affect the monitoring of order execution have been determined: the time spent on creating a document and the completeness of its execution. A mathematical model of the document creation process based on data from the EDMS has been developed, and regression coefficients have been calculated. Analytical dependences of the quality of the developed documents on the time of their execution and their volume have been obtained. The scientific novelty of the research lies in the development of an algorithm and software for automating the collection and analysis of data through the use of neural networks in the EDMS. The main scientific results include formalized criteria for documents and the stages of their development, the algorithm of the data mining subsystem, and the software developed for the EDMS of the Lyceum. The results obtained made it possible to identify the types of documents, and the stages of their development, that are most demanding of the resources necessary for their implementation, which can later be used to find ways to optimally organize the preparation of documents of various types.
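The regression step can be illustrated with a one-predictor ordinary least squares fit (the article fits dependences on both execution time and document volume; the data below are synthetic and the single-predictor form is a simplification):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (one predictor)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical data: quality score vs. hours spent on a document.
hours = [1.0, 2.0, 3.0, 4.0, 5.0]
quality = [9.0, 8.0, 7.0, 6.0, 5.0]
a, b = fit_linear(hours, quality)  # intercept and slope (regression coefficients)
```

The slope `b` here plays the role of the regression coefficients mentioned in the abstract; a multi-predictor model would estimate one such coefficient per characteristic.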
Title: "Development of a data mining subsystem for the citeck electronic document management system" (Analysis and data processing systems)
Pub Date: 2021-09-30 | DOI: 10.17212/2782-2001-2021-3-143-153
A. Sheet
In this paper, a PID controller and a Fuzzy Logic Controller (FLC) are used to control the speed of separately excited DC motors. The proportional, integral, and derivative gains (KP, KI, KD) of the PID controller are adjusted according to fuzzy logic rules. The FLC is designed according to fuzzy rules so that the system is fundamentally robust. Twenty-five fuzzy rules for self-tuning each parameter of the PID controller are considered. The FLC has two inputs: the first is the motor speed error (the difference between the reference and actual speed), and the second is the change in the speed error (the speed error derivative). The outputs of the FLC, i.e. the parameters of the PID controller, are used to control the speed of the separately excited DC motor. This study shows that the fuzzy self-tuning PID controller combines the precision of PID controllers with the flexibility of fuzzy controllers. The fuzzy self-tuning approach applied to the conventional PID structure improved the dynamic and static response of the system. The salient features of both the conventional and fuzzy self-tuning controller outputs are explored by simulation in MATLAB. The simulation results demonstrate that the proposed self-tuned PID controller provides good dynamic behavior of the DC motor, i.e. accurate speed tracking with a short settling time, minimum overshoot, and minimum steady-state error.
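The general idea of fuzzy self-tuning can be sketched as a PID step whose gains are rescaled from the error and its derivative. The toy scheduling function below stands in for the paper's 25-rule base; all gain formulas and constants are invented for illustration, not taken from the paper:

```python
def fuzzy_gain_scale(error, d_error):
    """Toy 'fuzzy' gain scheduling: a large error strengthens proportional
    action, a large error derivative strengthens damping, and integral
    action is backed off while the error is large. Stands in for the
    paper's 25-rule fuzzy rule base."""
    e = min(abs(error), 1.0)        # crude membership in 'large error'
    de = min(abs(d_error), 1.0)     # crude membership in 'large error change'
    kp = 1.0 + 0.5 * e
    kd = 0.1 + 0.2 * de
    ki = 0.05 * (1.0 - e)
    return kp, ki, kd

def pid_step(setpoint, measured, state, dt=0.01):
    """One PID update with gains supplied by the fuzzy scheduler."""
    error = setpoint - measured
    d_error = (error - state["prev_error"]) / dt
    kp, ki, kd = fuzzy_gain_scale(error, d_error)
    state["integral"] += error * dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * d_error

state = {"prev_error": 0.0, "integral": 0.0}
u = pid_step(1.0, 0.0, state, dt=1.0)  # one step with unit error
```

A real FLC would evaluate membership functions and defuzzify over the full rule table; the point here is only the structure: gains become functions of (error, error derivative) instead of constants.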
Title: "Optimization of DC motor speed control based on fuzzy logic-PID controller" (Analysis and data processing systems)
Pub Date: 2021-09-30 | DOI: 10.17212/2782-2001-2021-3-129-142
E. S. Chetvertakova, E. Chimitova
This paper considers the Wiener degradation model with random effects. Random-effect models take into account the unit-to-unit variability of the degradation index. It is assumed that the random parameter has a truncated normal distribution. In the course of the research, expressions for the maximum likelihood estimates and the reliability function have been obtained. Two statistical tests have been proposed to reveal the existence of random effects in degradation data corresponding to the Wiener degradation model: the first is the well-known likelihood ratio test, and the second is based on the variance estimate of the random parameter. These tests have been compared in terms of power using the Monte Carlo simulation method. The research has shown that the test based on the variance estimate of the random parameter is more powerful than the likelihood ratio test for the considered pairs of competing hypotheses. An example of analysis using the proposed tests is given for turbofan engine degradation data. The data set includes measurements recorded from 18 sensors for 100 engines. Before constructing the degradation model, a single degradation index was obtained using the principal component method. The hypothesis that the random effect is insignificant in the model was rejected by both tests. It has been shown that the random-effect Wiener degradation model describes the failure time distribution more accurately than the fixed-effect Wiener degradation model.
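A minimal simulation sketch of the setting, assuming the commonly used form X(t) = drift * t + sigma * W(t) with a unit-specific drift drawn from a normal distribution truncated at zero (the paper's exact parameterization and test statistics are not reproduced here):

```python
import random

def simulate_paths(n_units, times, mu, sigma_mu, sigma, seed=0):
    """Wiener degradation paths with a unit-specific drift (random effect).
    Drift ~ N(mu, sigma_mu) truncated at zero via resampling."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_units):
        drift = -1.0
        while drift <= 0:
            drift = rng.gauss(mu, sigma_mu)
        x, prev_t, path = 0.0, 0.0, []
        for t in times:
            dt = t - prev_t
            x += drift * dt + rng.gauss(0.0, sigma * dt ** 0.5)
            path.append(x)
            prev_t = t
        paths.append(path)
    return paths

def drift_variance(paths, times):
    """Between-unit sample variance of the per-unit drift estimates X(T)/T:
    a crude stand-in for a variance-based random-effect statistic."""
    est = [p[-1] / times[-1] for p in paths]
    m = sum(est) / len(est)
    return sum((e - m) ** 2 for e in est) / (len(est) - 1)

times = [float(t) for t in range(1, 11)]
fixed = simulate_paths(40, times, mu=1.0, sigma_mu=0.0, sigma=0.1, seed=2)
mixed = simulate_paths(40, times, mu=1.0, sigma_mu=0.5, sigma=0.1, seed=2)
```

On such data the between-unit variance of the drift estimates is visibly larger when a random effect is present, which is the intuition behind a variance-based test.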
Title: "Testing significance of random effects for the Wiener degradation model" (Analysis and data processing systems)
Pub Date: 2021-06-18 | DOI: 10.17212/2782-2001-2021-2-47-66
B. Lemeshko, S. Lemeshko
It is argued that in most cases two reasons underlie the incorrect application of nonparametric goodness-of-fit tests. The first reason is that, when testing composite hypotheses with the parameters of the law estimated from the analyzed sample, classical results associated with testing simple hypotheses are used. When testing composite hypotheses, the distributions of goodness-of-fit statistics are influenced by the form of the observed law F(x, θ) corresponding to the hypothesis being tested, by the type and number of estimated parameters, by the estimation method, and in some cases by the value of the shape parameter. The paper shows the influence of all the mentioned factors on the distributions of the test statistics. It is emphasized that, when testing composite hypotheses, neglecting the fact that the test has lost the property of being distribution-free leads to an increase in the probability of type II errors. It is shown that the distribution of the test statistic needed to draw a conclusion about the result of testing a composite hypothesis can be found by simulation in an interactive mode directly in the process of testing. The second reason is the presence of round-off errors, which can significantly change the distributions of the test statistics. The paper shows that asymptotic results for testing simple and composite hypotheses can be used when the round-off error Δ is much smaller than the standard deviation σ of the distribution law of the measurement errors and the sample size n does not exceed some maximum value.
For sample sizes larger than this maximum value, the real distributions of the test statistics deviate from the asymptotic ones towards larger values of the statistics. In such situations, using asymptotic distributions to draw conclusions about the test results leads to an increase in the probability of type I errors (the rejection of a valid hypothesis being tested). It is shown that when the round-off error Δ and σ are commensurable, the distributions of the test statistics deviate from the asymptotic distributions already for small n, and as n grows the situation only gets worse. The paper demonstrates the changes in the distributions of the statistics under the influence of rounding when testing both simple and composite hypotheses. It is shown that the only way to ensure the correctness of conclusions drawn from the applied tests in such non-standard conditions is to use the real distributions of the statistics. This task can be solved interactively (in the process of verification), relying on computer research technologies and the apparatus of mathematical statistics.
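A minimal Monte Carlo sketch (not the authors' software) of how rounding inflates the Kolmogorov statistic for a simple hypothesis: samples from N(0, 1) are rounded to a grid, and the empirical 0.9-quantile of the statistic is compared with and without rounding. The rounding step Δ = 1 is deliberately coarse (commensurable with σ = 1) to make the effect unmistakable:

```python
import math
import random

def ks_statistic(sample, cdf):
    """Kolmogorov statistic D_n for a fully specified cdf (simple hypothesis)."""
    xs = sorted(sample)
    n = len(xs)
    return max(max((i + 1) / n - cdf(x), cdf(x) - i / n)
               for i, x in enumerate(xs))

def norm_cdf(x, mu=0.0, s=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2.0))))

def mc_quantile(n, rounding, trials=500, q=0.9, seed=1):
    """Monte Carlo q-quantile of D_n for N(0,1) samples, optionally rounded
    to a grid with step `rounding` (0 disables rounding)."""
    rng = random.Random(seed)
    stats = []
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        if rounding:
            xs = [round(x / rounding) * rounding for x in xs]
        stats.append(ks_statistic(xs, norm_cdf))
    stats.sort()
    return stats[int(q * trials)]

q90_exact = mc_quantile(200, 0.0)
q90_rounded = mc_quantile(200, 1.0)  # Δ commensurable with σ
```

Comparing the rounded statistic against the unrounded (asymptotic) critical value would reject a valid hypothesis far too often, which is exactly the type I error inflation the paper describes.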
Title: "Problems of nonparametric goodness-of-fit test application in tasks of measurement results processing" (Analysis and data processing systems)
Pub Date: 2021-06-18 | DOI: 10.17212/2782-2001-2021-2-83-94
Zafar Usmanov, Abdunabi A. Kosimov
Using the example of a model collection of 10 texts in five languages (English, German, Spanish, Italian, and French) written in the Latin script, the article establishes the applicability of the γ-classifier for automatic recognition of the language of a work based on the frequencies of the 26 common letters of the Latin alphabet. The mathematical model of the γ-classifier is represented as a triad. Its first component is a digital portrait (DP) of the text: the frequency distribution of alphabetic unigrams in the text. The second component consists of formulas for calculating the distances between the DPs of texts, and the third is a machine learning algorithm that implements the hypothesis of the "homogeneity" of works written in one language and the "heterogeneity" of works written in different languages. Tuning the algorithm on a table of pairwise distances between all works of the model collection consisted in determining the optimal value of the real parameter γ for which the error of violating the "homogeneity" hypothesis is minimized. The γ-classifier trained on the texts of the model collection showed high (100 %) accuracy in recognizing the languages of the works. To test the classifier, six additional random texts were selected, five of which were in the same languages as the texts of the model collection. By the nearest-neighbor (in terms of distance) method, all the new texts confirmed their homogeneity with the corresponding pairs of monolingual works. The sixth text, in Romanian, showed its heterogeneity with respect to all elements of the collection, while being closest, in terms of minimum distances, first to the two texts in Spanish and then to the two works in Italian.
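The digital-portrait-plus-nearest-neighbor idea can be sketched compactly. The γ-parameterized distance formula is not specified in this abstract, so a plain L1 distance between letter-frequency portraits is used as a stand-in, and the tiny training texts are invented for the example:

```python
import string
from collections import Counter

def portrait(text):
    """'Digital portrait': relative frequencies of the 26 Latin letters."""
    letters = [c for c in text.lower() if c in string.ascii_lowercase]
    counts = Counter(letters)
    n = len(letters)
    return [counts.get(c, 0) / n for c in string.ascii_lowercase]

def distance(p, q):
    # L1 distance between portraits; a plain stand-in for the
    # gamma-parameterized distance of the article.
    return sum(abs(a - b) for a, b in zip(p, q))

def nearest_language(text, labeled):
    """Assign the language of the nearest labeled portrait."""
    dp = portrait(text)
    return min(labeled, key=lambda name: distance(dp, labeled[name]))

samples = {
    "en": "the quick brown fox jumps over the lazy dog and then the dog sleeps by the door",
    "es": "el rapido zorro marron salta sobre el perro perezoso y luego el perro duerme junto a la puerta",
}
labeled = {lang: portrait(text) for lang, text in samples.items()}
guess = nearest_language("the weather in the north is rather harsh this month", labeled)
```

With realistic text lengths (whole works, as in the article), unigram portraits separate related Latin-script languages surprisingly well.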
Title: "Testing the classifier adapted to recognize the languages of works based on the Latin alphabet" (Analysis and data processing systems)
Pub Date: 2021-06-18 | DOI: 10.17212/2782-2001-2021-2-95-120
Dmitry Iakubovsky, D. Krupenev, D. Boyarkin
A steady trend towards the development of electric power systems leads to their continuous enlargement and increasing sophistication and, as a result, to new ways of controlling them. In this regard, the existing models and complexes for adequacy assessment may work incorrectly and ineffectively in terms of the adequacy of the results obtained. To assess the current state of the existing models and complexes, we reviewed and analyzed domestic and foreign software and computer systems; in particular, we considered mathematical models for minimizing the power shortage. This work addresses the problem of modifying the mathematical models for minimizing the power shortage used in the adequacy assessment of electric power systems in one of the complexes under consideration. As a modification, it is proposed to abandon the existing method based on line capacities and to use correct accounting for the maximum permissible active power flow in controlled sections. The experimental part of the paper concerns the testing of variants of the power shortage minimization models, as well as the proposed modifications, on various systems, including those consisting of three and seven reliability zones with a variable number of controlled sections and power lines included in them. The results of the study have shown that the proposed modifications are efficient and can be used in the future. The most adequate results in terms of the physical laws of electric power system operation were obtained with the power shortage minimization model with quadratic losses, which takes into account the limitations on power transmission over controlled sections.
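A toy two-zone illustration (not the paper's model, which is an optimization with quadratic losses over many zones) of why transfer limits on controlled sections matter: a surplus zone can cover a neighboring deficit only up to the section's permissible flow, so the same system shows different shortages under different limits. All numbers are invented:

```python
def power_shortage(load, gen, link_cap):
    """Two-zone sketch: zone 0 surplus covers zone 1 deficit only up to the
    controlled-section transfer limit `link_cap`. Returns total unserved load."""
    surplus = max(gen[0] - load[0], 0.0)
    deficit = max(load[1] - gen[1], 0.0)
    transfer = min(surplus, deficit, link_cap)   # flow through the section
    own_short = max(load[0] - gen[0], 0.0)       # zone 0's own shortage, if any
    return own_short + (deficit - transfer)

# Same loads and generation, two different section limits (MW).
short_capped = power_shortage(load=[50.0, 100.0], gen=[80.0, 60.0], link_cap=20.0)
short_open = power_shortage(load=[50.0, 100.0], gen=[80.0, 60.0], link_cap=50.0)
```

Ignoring the section limit (or replacing it with a crude line-capacity sum) would report the smaller shortage even when the flow is physically infeasible, which is the kind of inadequacy the modification targets.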
Title: "A minimization model of the power shortage of electric power systems with regard to the restrictions on controlled sections" (Analysis and data processing systems)
Pub Date : 2021-06-18 DOI: 10.17212/2782-2001-2021-2-135-145
Daria Borovikova, Oleg Grishin, Anastasia Nenko, Anton Yupashevsky, Anna S. Kazmina, Artem V. Markov, Konstantin Metsler
In recent years, there has been a dramatic increase in the number of people suffering from functional voice disorders, usually caused by psychoemotional stress. Such disorders bring significant discomfort to a person's life as they reduce their communication and social adaptation capacity, which in turn increases the psychoemotional load. As a result, functional disorders become fixed by a vicious-circle mechanism and can transform into a pathology of the speech apparatus. The main method of diagnosis remains expert assessment, which directly depends on the professional skills of a specialist in working with voice. It is therefore relevant to develop systems for diagnosing voice-speech disorders that provide an objective assessment based on the processing of voice-speech characteristics and can identify a violation in time to prevent the development of pathology. Such methods and systems can be useful both for diagnostics and for monitoring the effectiveness of voice therapy. The existing methods of hardware diagnostics have not yet found practical application because their results are inconsistent with expert evaluation. In this paper, we propose a new concept of a hardware and software complex for voice analysis based on the acoustic characteristics of a set of harmonics of the voice signal. A VASA (Voice and Speech Analyzing system) complex has been developed that provides an automatic analysis of the amplitudes of the first 16 harmonics. The tests performed on three volunteers showed a high level of reproducibility and repeatability (within 10 % < %R&R < 30 %), sufficient for conducting comparative studies on healthy people and people with functional speech disorders.
{"title":"Development of a hardware and software complex for speech analysis and correction","authors":"Daria Borovikova, Oleg Grishin, Anastasia Nenko, Anton Yupashevsky, Anna S. Kazmina, Artem V. Markov, Konstantin Metsler","doi":"10.17212/2782-2001-2021-2-135-145","DOIUrl":"https://doi.org/10.17212/2782-2001-2021-2-135-145","url":null,"abstract":"In recent years, there has been a dramatic increase in the number of people suffering from functional voice disorders, usually caused by psychoemotional stress. Such disorders bring significant discomfort to a person's life as they reduce their communication and social adaptation capacity, which in turn increases the psychoemotional load. As a result, functional disorders become fixed by a vicious-circle mechanism and can transform into a pathology of the speech apparatus. The main method of diagnosis remains expert assessment, which directly depends on the professional skills of a specialist in working with voice. It is therefore relevant to develop systems for diagnosing voice-speech disorders that provide an objective assessment based on the processing of voice-speech characteristics and can identify a violation in time to prevent the development of pathology. Such methods and systems can be useful both for diagnostics and for monitoring the effectiveness of voice therapy. The existing methods of hardware diagnostics have not yet found practical application because their results are inconsistent with expert evaluation. In this paper, we propose a new concept of a hardware and software complex for voice analysis based on the acoustic characteristics of a set of harmonics of the voice signal. A VASA (Voice and Speech Analyzing system) complex has been developed that provides an automatic analysis of the amplitudes of the first 16 harmonics. 
The tests performed on three volunteers showed a high level of reproducibility and repeatability (within 10 % < %R&R < 30 %), sufficient for conducting comparative studies on healthy people and people with functional speech disorders.","PeriodicalId":292298,"journal":{"name":"Analysis and data processing systems","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117102768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
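The central measurement — the amplitudes of the first 16 harmonics of a voice signal — reduces to reading the spectrum at integer multiples of the fundamental frequency. A minimal FFT-based sketch follows; the function, the 16 kHz sampling rate, the 120 Hz fundamental, and the synthetic test signal are illustrative assumptions, not the VASA implementation.

```python
import numpy as np

FS = 16_000        # assumed sampling rate, Hz
F0 = 120           # assumed fundamental frequency of the voice, Hz
N_HARMONICS = 16   # the complex analyzes the first 16 harmonics

def harmonic_amplitudes(signal, fs, f0, n_harmonics=N_HARMONICS):
    """Amplitudes of the first n_harmonics integer multiples of f0."""
    # One-sided spectrum scaled so a pure sine of amplitude a reads as a.
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
    amps = []
    for k in range(1, n_harmonics + 1):
        # Nearest FFT bin to the k-th harmonic frequency k * f0.
        bin_idx = int(round(k * f0 * len(signal) / fs))
        amps.append(spectrum[bin_idx])
    return np.array(amps)

# Synthetic "voiced" signal: 5 harmonics with a 1/k amplitude roll-off.
t = np.arange(FS) / FS   # one second of samples -> 1 Hz bin resolution
signal = sum((1 / k) * np.sin(2 * np.pi * k * F0 * t) for k in range(1, 6))
amps = harmonic_amplitudes(signal, FS, F0)
```

A real analyzer would first estimate f0 (e.g. by autocorrelation) and window the signal; here the one-second frame makes every harmonic fall exactly on an FFT bin, so the recovered amplitudes match the synthesis coefficients.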
Pub Date : 2021-06-18 DOI: 10.17212/2782-2001-2021-2-35-46
D. Vagin
The structure and features of a software package for 3D inversion of geophysical data are considered. The presented software package is focused on solving direct and inverse problems of electrical exploration and engineering geophysics. In addition to the parameters that determine the physical properties of the medium, the software package allows you to restore the geometry parameters of the geophysical model, namely layer reliefs and the boundaries of three-dimensional inclusions. The inclusions can be in the form of arbitrary hexahedrons or prisms with a polygonal base. The software package consists of four main subsystems: an interface, subsystems for solving direct and inverse problems, and a client-server part for performing calculations on remote computing nodes. The graphical interface consists of geophysicist-oriented pre- and postprocessor modules that allow you to describe the problem and present the results of its solution in user-friendly terms. To solve direct problems, the finite element method and the technology of dividing the field into normal and anomalous components are used. At the same time, special methods of discretization of the computational domain are used, which make it possible to take into account both the complex three-dimensional structure of the environment and the presence of man-made objects (wells) in the computational domain. To increase the efficiency of solving direct problems, nonconforming grids with cells in the form of arbitrary hexahedrons are used. Methods for the efficient calculation of the derivatives with respect to the geometry parameters, which are necessary for solving inverse problems by the Gauss-Newton method, are also described. The main idea behind the efficient computation of derivatives is to identify the effect that changing a parameter value (used to compute the generalized derivative) has on the problem. 
The main actions performed by the subsystem for solving inverse problems and the features associated with the processing of geometry parameters are described.
{"title":"The structure and features of the software for geophysical geometrical 3D inversions","authors":"D. Vagin","doi":"10.17212/2782-2001-2021-2-35-46","DOIUrl":"https://doi.org/10.17212/2782-2001-2021-2-35-46","url":null,"abstract":"The structure and features of a software package for 3D inversion of geophysical data are considered. The presented software package is focused on solving direct and inverse problems of electrical exploration and engineering geophysics. In addition to the parameters that determine the physical properties of the medium, the software package allows you to restore the geometry parameters of the geophysical model, namely layer reliefs and the boundaries of three-dimensional inclusions. The inclusions can be in the form of arbitrary hexahedrons or prisms with a polygonal base. The software package consists of four main subsystems: an interface, subsystems for solving direct and inverse problems, and a client-server part for performing calculations on remote computing nodes. The graphical interface consists of geophysicist-oriented pre- and postprocessor modules that allow you to describe the problem and present the results of its solution in user-friendly terms. To solve direct problems, the finite element method and the technology of dividing the field into normal and anomalous components are used. At the same time, special methods of discretization of the computational domain are used, which make it possible to take into account both the complex three-dimensional structure of the environment and the presence of man-made objects (wells) in the computational domain. To increase the efficiency of solving direct problems, nonconforming grids with cells in the form of arbitrary hexahedrons are used. Methods for the efficient calculation of the derivatives with respect to the geometry parameters, which are necessary for solving inverse problems by the Gauss-Newton method, are also described. 
The main idea behind the efficient computation of derivatives is to identify the effect that changing a parameter value (used to compute the generalized derivative) has on the problem. The main actions performed by the subsystem for solving inverse problems and the features associated with the processing of geometry parameters are described.","PeriodicalId":292298,"journal":{"name":"Analysis and data processing systems","volume":"238 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123007424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
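The derivative strategy described here — measuring how a perturbation of each geometry parameter changes the modeled response — is, in its simplest form, a finite-difference Jacobian feeding a Gauss-Newton update. The toy sketch below substitutes a two-parameter analytic function for the real 3D finite element forward solver; all names and the model itself are illustrative.

```python
import numpy as np

def fd_jacobian(forward, params, delta=1e-6):
    """Finite-difference Jacobian: perturb each parameter in turn,
    re-run the forward model, and record the response change."""
    base = forward(params)
    J = np.zeros((base.size, params.size))
    for j in range(params.size):
        p = params.copy()
        p[j] += delta
        J[:, j] = (forward(p) - base) / delta
    return J

def gauss_newton(forward, observed, params, n_iter=15):
    """Plain Gauss-Newton iteration on the residual observed - forward(p)."""
    for _ in range(n_iter):
        r = observed - forward(params)
        J = fd_jacobian(forward, params)
        params = params + np.linalg.solve(J.T @ J, J.T @ r)
    return params

# Toy stand-in for the FEM solver: the response depends on two
# "geometry" parameters (say, a contrast amplitude and a layer depth).
def forward(p):
    x = np.linspace(0.0, 1.0, 20)
    return p[0] * np.exp(-x / p[1])

true_params = np.array([2.0, 0.5])
observed = forward(true_params)                       # noiseless synthetic data
recovered = gauss_newton(forward, observed, np.array([1.5, 0.7]))
```

In the real package, each column of the Jacobian costs one additional forward solve, which is exactly why the article emphasizes identifying only the part of the problem affected by the perturbed parameter.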
Pub Date : 2021-06-18 DOI: 10.17212/2782-2001-2021-2-19-34
Yuri Bulatov, A. Kryukov
The power industry is currently actively developing the use of distributed generation plants located near the power receiving devices of consumers. At the same time, the introduction of distributed generation plants raises a number of engineering problems that need to be solved. One of them is the optimization of the settings of automatic voltage regulators (AVR) and automatic speed regulators (ASR) of synchronous generators in all possible operating modes. This requires the use of complex models of power supply systems, distributed generation plants and their regulators, as well as labor-intensive calculations that take into account a large number of interrelated parameters. However, there is another approach based on the use of predictive controllers; in this case, only one parameter is needed for linear predictive models. The article describes a method for constructing and tuning the proposed predictive ASR for a synchronous generator, as well as the computer models of distributed generation plants used in the research. The purpose of the research was to assess the cyber security of power supply systems equipped with various distributed generation plants with predictive speed controllers that can be implemented on the basis of microprocessor technology. The studies were carried out in MATLAB using the Simulink and SimPowerSystems simulation packages on computer models of distributed generation plants with one turbine generator operating at a dedicated load, as well as a group of hydrogenerators connected to a high-power electric power system. The simulation results showed the effectiveness of the proposed predictive control algorithms, as well as the fact that their cyber security can be increased by introducing hardware restrictions on the range of changes in the time constant of the predictive link.
{"title":"Study of cyber security of predictive control algorithms for distributed generation plants","authors":"Yuri Bulatov, A. Kryukov","doi":"10.17212/2782-2001-2021-2-19-34","DOIUrl":"https://doi.org/10.17212/2782-2001-2021-2-19-34","url":null,"abstract":"The power industry is currently actively developing the use of distributed generation plants located near the power receiving devices of consumers. At the same time, the introduction of distributed generation plants raises a number of engineering problems that need to be solved. One of them is the optimization of the settings of automatic voltage regulators (AVR) and automatic speed regulators (ASR) of synchronous generators in all possible operating modes. This requires the use of complex models of power supply systems, distributed generation plants and their regulators, as well as labor-intensive calculations that take into account a large number of interrelated parameters. However, there is another approach based on the use of predictive controllers; in this case, only one parameter is needed for linear predictive models. The article describes a method for constructing and tuning the proposed predictive ASR for a synchronous generator, as well as the computer models of distributed generation plants used in the research. The purpose of the research was to assess the cyber security of power supply systems equipped with various distributed generation plants with predictive speed controllers that can be implemented on the basis of microprocessor technology. The studies were carried out in MATLAB using the Simulink and SimPowerSystems simulation packages on computer models of distributed generation plants with one turbine generator operating at a dedicated load, as well as a group of hydrogenerators connected to a high-power electric power system. 
The simulation results showed the effectiveness of the proposed predictive control algorithms, as well as the fact that their cyber security can be increased by introducing hardware restrictions on the range of changes in the time constant of the predictive link.","PeriodicalId":292298,"journal":{"name":"Analysis and data processing systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128909554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
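The protective measure in the conclusion — a hardware restriction on the range of the predictive link's time constant — can be illustrated with a minimal first-order linear predictor. The class name, the permissible range, and the backward-difference derivative are illustrative assumptions, not the article's controller.

```python
class PredictiveLink:
    """First-order linear predictive link: y_pred = y + T_p * dy/dt.

    The prediction time constant T_p is clamped to a hardware-limited
    range, so that a maliciously altered setting (the cyber-security
    concern) cannot push the regulator out of its stable region.
    """
    T_MIN, T_MAX = 0.01, 0.5  # assumed permissible range, seconds

    def __init__(self, t_p, dt):
        # Clamp the requested time constant into the safe range.
        self.t_p = min(max(t_p, self.T_MIN), self.T_MAX)
        self.dt = dt      # controller sampling step, seconds
        self.prev = None  # previous sample for the derivative estimate

    def step(self, y):
        """Return the predicted value of the regulated quantity."""
        dy = 0.0 if self.prev is None else (y - self.prev) / self.dt
        self.prev = y
        return y + self.t_p * dy

# A tampered setting of 10 s is silently clamped to the 0.5 s limit,
# so the single tunable parameter stays within its safe range.
link = PredictiveLink(t_p=10.0, dt=0.01)
```

Implemented in microprocessor hardware rather than in firmware configuration, such a clamp bounds the damage an attacker can do through the controller's only tunable parameter.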