Pub Date: 2022-10-17 | DOI: 10.15588/1607-3274-2022-3-14
S. Prykhodko, A. Pukhalevych, K. Prykhodko, L. Makarova
Context. Estimating the duration of software development in Java for personal computers (PC) is an important problem because, first, failed duration estimates are often the main contributor to failed software projects; second, Java is a popular language; and third, the personal computer is a widespread multi-purpose computer. The object of the study is the process of estimating the duration of software development in Java for PC. The subject of the study is nonlinear regression models for estimating this duration. Objective. The goal of the work is to build nonlinear regression models for estimating the duration of software development in Java for PC based on normalizing transformations and deletion of outliers from the data, in order to increase the confidence of the estimation in comparison with the ISBSG model for the PC platform. Method. The models and the confidence and prediction intervals of nonlinear regressions for estimating the duration are constructed from normalizing transformations for non-Gaussian data using appropriate techniques, with outlier removal applied during model construction. In general, this reduces the mean magnitude of relative error and the widths of the confidence and prediction intervals in comparison with nonlinear models constructed without outlier removal. Results. The model based on the decimal logarithm transformation has been compared with the nonlinear regression models based on the Johnson (SB family) and Box-Cox transformations, both univariate and bivariate. Conclusions.
The nonlinear regression model for estimating the duration of software development in Java for PC is constructed based on the decimal logarithm transformation. In comparison with the other nonlinear regression models, this model has smaller widths of the confidence and prediction intervals for effort values greater than 900 person-hours. Prospects for further research include applying bivariate normalizing transformations and other data sets to construct nonlinear regression models for estimating the duration of software development in other languages for PC and on other platforms, for example, mainframe.
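A model built on the decimal logarithm transformation amounts to a linear fit in log-log space that is back-transformed for prediction. A minimal sketch of that idea on synthetic data (the ISBSG data used in the paper is not freely available, and the coefficients below are illustrative, not the paper's):

```python
import numpy as np

def fit_loglog(effort, duration):
    # The decimal-log transformation linearizes the model:
    # log10(duration) = b0 + b1 * log10(effort)
    b1, b0 = np.polyfit(np.log10(effort), np.log10(duration), 1)
    return b0, b1

def predict_duration(effort, b0, b1):
    # Back-transform the linear prediction to the original scale
    return 10 ** (b0 + b1 * np.log10(effort))

# Synthetic effort/duration pairs: duration roughly proportional to
# effort**0.4 with multiplicative noise (purely illustrative numbers)
rng = np.random.default_rng(0)
effort = rng.uniform(100, 5000, 40)
duration = 0.5 * effort ** 0.4 * 10 ** rng.normal(0, 0.05, 40)
b0, b1 = fit_loglog(effort, duration)
```

Because the fit is linear in the transformed space, the usual normal-theory confidence and prediction intervals apply there and are back-transformed the same way, which is what makes the interval widths of such models directly comparable.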
Title: "NONLINEAR REGRESSION MODELS FOR ESTIMATING THE DURATION OF SOFTWARE DEVELOPMENT IN JAVA FOR PC BASED ON THE 2021 ISBSG DATA" (Radio Electronics Computer Science Control, 2022)
Pub Date: 2022-10-16 | DOI: 10.15588/1607-3274-2022-3-10
O. V. Orlovskiy, K. Sohrab, S. Ostapov, K. P. Hazdyuk, L. Shumylyak
Context. Online platforms and environments continue to generate ever-increasing amounts of content, so the task of automating the moderation of user-generated content remains relevant. Of particular note are cases in which, for one reason or another, there is very little data with which to train the classifier. To achieve results under such conditions, it is important to incorporate into the classifier pre-trained models that were trained on large amounts of data from a wide range of domains. This paper deals with the use of the pre-trained multilingual Universal Sentence Encoder (USE) model as a component of the developed classifier and the effect of hyperparameters on classification accuracy when learning from a small amount of data (~0.05% of the dataset). Objective. The goal of this paper is to investigate the influence of the pre-trained multilingual model and of the choice of hyperparameters on the result of training a text data classifier. Method. A relatively new approach, few-shot learning, is used: learning from a comparatively small number of examples. Since text remains the dominant way of transmitting information, studying the possibility of constructing a text classifier that learns from a small number of examples (~0.002–0.05% of the data set) is a relevant problem. Results. It is shown that even with a small number of training examples (36 per class), the use of USE and an optimal training configuration makes it possible to achieve high classification accuracy on English and Russian data, which is extremely important when it is impossible to collect one's own large data set. The influence of the USE-based approach and of a set of different hyperparameter configurations on the result of the text classifier is evaluated on English and Russian data sets. Conclusions. The experiments show that the correct selection of hyperparameters matters to a significant degree.
In particular, this paper considered the batch size, the optimizer, the number of training epochs, and the percentage of the data set taken to train the classifier. In the course of the experiments, an optimal configuration of hyperparameters was selected with which 86.46% classification accuracy on the Russian-language data set and 91.13% on the English-language data set can be achieved in ten seconds of training (training time can be significantly affected by the hardware used).
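The setup described above, a frozen multilingual sentence encoder feeding a small trainable classification head, can be sketched as follows. Loading the actual USE model requires TensorFlow Hub, so the 512-dimensional embeddings are simulated here; the hyperparameter names and values (batch size, learning rate, epochs) are illustrative, not the paper's selected configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, CLASSES, SHOTS = 512, 3, 36   # USE embeddings are 512-dim; 36 shots/class

# Simulated "embeddings": one cluster of sentence vectors per class
centers = rng.normal(0, 1, (CLASSES, DIM))
X = np.vstack([c + rng.normal(0, 0.5, (SHOTS, DIM)) for c in centers])
y = np.repeat(np.arange(CLASSES), SHOTS)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Trainable head: a single softmax layer over the frozen embeddings
W = np.zeros((DIM, CLASSES))
EPOCHS, BATCH, LR = 30, 16, 0.5    # the hyperparameters the paper varies
for _ in range(EPOCHS):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), BATCH):
        b = idx[start:start + BATCH]
        p = softmax(X[b] @ W)
        p[np.arange(len(b)), y[b]] -= 1.0          # cross-entropy gradient
        W -= LR * X[b].T @ p / len(b)              # mini-batch SGD step

accuracy = float((softmax(X @ W).argmax(axis=1) == y).mean())
```

Because only the small head is trained while the encoder stays fixed, training completes in seconds even on CPU, which matches the ten-second training times reported above.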
Title: "MULTILINGUAL TEXT CLASSIFIER USING PRE-TRAINED UNIVERSAL SENTENCE ENCODER MODEL"
Pub Date: 2022-10-16 | DOI: 10.15588/1607-3274-2022-3-4
V. Kuzmin, R. Khrashchevskyi, M. Kulik, O. Ivanets, M. Zaliskyi, Yu. V. Petrova
Context. The problem is the approximation of empirical data in a decision-making system for safety management. The object of the study was to verify the adequacy of the coefficients of a mathematical model for data approximation using information technology. Objective. The goal of the work is to create, using information technology and based on an analysis of different approaches to approximating empirical data, an adequate mathematical model that can be used to predict the current state of the operator in the flight safety system. Method. A comparative analysis of models describing the transformation of information indicators with a non-standard structure was carried out. The following models with similar visual representations were selected for comparison: parabolas of the second and third order, single regression, and regression with jumps. New approaches to approximation are proposed, based on the criterion proposed by Kuzmin and on the Heaviside function. The adequacy of the approximation was checked using these criteria, which made it possible to choose an adequate mathematical model to describe the transformation of information indicators. The stages of obtaining the mathematical model were as follows: determining the minimum sum of squared deviations for all information indicators simultaneously; using the Heaviside function; optimizing along the abscissa axis in certain areas; and applying the linearity test. The obtained mathematical model adequately describes the transformation of information indicators, which will allow forecasting changes in the medical and biological indicators of operators performing professional duties in aviation, as one of the methods of accounting for the human factor in a proactive approach to flight safety. Results. The results of the study can be used in constructing mathematical models to describe empirical data of this kind. Conclusions.
Experimental studies suggest recommending three-segment linear regression with jumps as an adequate mathematical model for formalizing the description of empirical data with a non-standard structure; it can be used in practice to build models for predicting operator dysfunction as one of the causes of adverse events in aviation. Prospects for further research include the creation of a multiparameter mathematical model that predicts violations of the functional state of the operator from informative parameters, as well as experimental study of the proposed mathematical approaches on a wide range of practical problems of different nature and dimension.
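Segmented regression with jumps of the kind described can be expressed as an ordinary least-squares fit over a design matrix that contains, for each breakpoint, a Heaviside step term (the jump) and a hinge term (the slope change). A minimal sketch with assumed, fixed breakpoints (the paper additionally optimizes along the abscissa axis and applies Kuzmin's criterion, which is omitted here):

```python
import numpy as np

def heaviside_design(x, knots):
    # Columns: intercept, slope, then per knot a step H(x-k) (the jump)
    # and a hinge H(x-k)*(x-k) (the slope change after the knot).
    cols = [np.ones_like(x), x]
    for k in knots:
        step = np.heaviside(x - k, 1.0)
        cols += [step, step * (x - k)]
    return np.column_stack(cols)

def fit_segmented(x, y, knots):
    # Ordinary least squares over the segmented design matrix
    coef, *_ = np.linalg.lstsq(heaviside_design(x, knots), y, rcond=None)
    return coef

# Synthetic three-segment signal with a jump at each breakpoint
x = np.linspace(0, 10, 200)
y = np.where(x < 3, 1 + 0.5 * x,
    np.where(x < 7, 5 - 0.2 * (x - 3), 2 + 0.8 * (x - 7)))
knots = [3, 7]
coef = fit_segmented(x, y, knots)
resid = np.abs(heaviside_design(x, knots) @ coef - y).max()
```

Since the synthetic signal lies exactly in the model class, the fit recovers it to numerical precision; with real empirical data the same design matrix yields the least-squares three-segment approximation.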
Title: "MATHEMATICAL MODEL FOR DECISION MAKING SYSTEM BASED ON THREE-SEGMENTED LINEAR REGRESSION"
Pub Date: 2022-10-16 | DOI: 10.15588/1607-3274-2022-3-6
Y. Bodyanskiy, I. Pliss, A. Shafronenko, O. Kalynychenko
Context. The task of clustering, that is, unsupervised classification of data arrays, occupies an important place in Data Mining. Many approaches have been proposed to solve this problem, differing in their a priori assumptions about the analyzed arrays and in the mathematical apparatus underlying particular methods. The solution of clustering problems is complicated by the high dimensionality of the analyzed observation vectors and by distortions of various types. Objective. The purpose of the work is to introduce a fuzzy clustering procedure that combines the advantages of methods based on the analysis of data distribution densities and their peaks, which are characterized by high speed and can work effectively with overlapping classes. Method. A method of fuzzy clustering of data arrays is introduced, based on analyzing the distribution densities of the data and their peaks together with a credibilistic fuzzy approach. The advantage of the proposed approach is the reduced time for solving the optimization problems related to finding the attractors of the density functions, since the number of calls to the optimization block is determined not by the volume of the analyzed array but by the number of its density peaks. Results. The method is quite simple in numerical implementation and is not critical to the choice of the optimization procedure. The experimental results confirm its effectiveness in clustering problems with intersecting clusters and allow us to recommend the proposed method for practical use in automatic clustering of large data volumes. Conclusions.
The advantage of the proposed approach is the reduced time for solving the optimization problems related to finding the attractors of the density functions, since the number of calls to the optimization block is determined not by the volume of the analyzed array but by the number of its density peaks. The experimental results confirm the effectiveness of the proposed approach in clustering problems under conditions of overlapping clusters.
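The core idea, locating the density peaks first so that the expensive optimization runs once per peak rather than once per observation, can be illustrated with a simplified stand-in: a Rodriguez-Laio-style density-peak search followed by fuzzy-c-means-style memberships. The credibilistic weighting of the paper is not reproduced here; all parameter choices are illustrative:

```python
import numpy as np

def density_peak_fuzzy(X, n_clusters=2, m=2.0):
    # Pairwise distances and a Gaussian local density estimate
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    dc = np.percentile(d[d > 0], 10)          # kernel width heuristic
    rho = np.exp(-(d / dc) ** 2).sum(axis=1)
    # delta: distance to the nearest point of higher density
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = rho > rho[i]
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()
    # Prototypes = points maximizing density * separation; only these
    # few peaks are processed further, not the whole array.
    peaks = np.argsort(rho * delta)[-n_clusters:]
    dp = d[:, peaks] + 1e-12
    inv = dp ** (-2.0 / (m - 1.0))            # FCM-style fuzzy memberships
    return peaks, inv / inv.sum(axis=1, keepdims=True)

# Two overlappable Gaussian blobs
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
peaks, u = density_peak_fuzzy(X)
```

Each row of `u` sums to one, so points lying between the two peaks receive intermediate memberships in both clusters, which is what makes this family of methods usable with overlapping classes.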
Title: "CREDIBILISTIC FUZZY CLUSTERING BASED ON ANALYSIS OF DATA DISTRIBUTION DENSITY AND THEIR PEAKS"
Pub Date: 2022-10-16 | DOI: 10.15588/1607-3274-2022-3-7
Tetiana A. Vakaliuk, R. Kukharchuk, O. Zaika, A. V. Riabko
Context. Among the variety of tasks solved by robotics, one can single out a number for which small robot dimensions are desirable and sometimes necessary. Solving such tasks requires micro-robots of small size, whose mass allows them to move freely through tight passages and in difficult weather conditions while remaining unnoticed. At the same time, the small dimensions of a microrobot impose some indirect restrictions; therefore, it is better to use groups of microrobots for these tasks. The efficiency of using groups of microrobots depends on the chosen control strategy and on the stochastic search algorithms for optimizing the control of a group (swarm) of microrobots. Objective. The purpose of this work is to consider a group of swarm algorithms (methods) belonging to the class of metaheuristics. This group includes, in particular, the ant colony algorithm, whose possibilities were investigated for solving the traveling salesman problem, which often arises when developing an algorithm for the behavior of a group of microrobots. Method. At the first stage of the study, the main groups of parameters determining the flow of the ant colony algorithm and characterizing its state at any moment were identified: input, control, disturbance, and output parameters. After that, an algorithm was developed whose advantages are scalability and guaranteed convergence, making it possible to obtain an optimal solution regardless of the dimension of the graph. At the second stage, the algorithm was implemented in the Matlab language. Computer experiments were carried out to determine the influence of the input, control, output, and disturbance parameters on the convergence of the algorithm.
Attention was paid to the main groups of indicators that determine the direction of the method and characterize the state of the swarm of microrobots at a given time. In the computational experiments, the number of ants placed at the nodes of the network, the amount of pheromone, and the number of graph nodes were varied; the number of iterations needed to find the shortest path and the execution time of the method were measured. A final test of the modeling and of the method's performance was carried out. Results. The application of the ant algorithm to the traveling salesman problem was studied on test graphs with randomly placed vertices; for a constant number of vertices with a varying number of ants; for a constant number of vertices at different values of the coefficient Q; for a constant number of vertices at different values of the pheromone evaporation coefficient p; and for different numbers of graph vertices. The results showed that ant methods find good traveling salesman routes much faster than clear-cut combinatorial
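The parameters varied in the experiments map directly onto the standard ant colony scheme: pheromone trails with an evaporation coefficient p (usually written ρ), a pheromone deposit scaled by Q, and a heuristic visibility term 1/distance. A minimal sketch of that scheme on a small synthetic instance (parameter values are illustrative, not those from the paper's experiments):

```python
import numpy as np

def ant_colony_tsp(dist, n_ants=20, n_iter=50, alpha=1.0, beta=2.0,
                   rho=0.5, Q=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(dist)
    eta = 1.0 / (dist + np.eye(n))      # heuristic visibility, 1/distance
    tau = np.ones((n, n))               # pheromone trails
    best_len, best_tour = np.inf, None
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:            # probabilistic next-city choice
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                w = tau[i, cand] ** alpha * eta[i, cand] ** beta
                nxt = int(rng.choice(cand, p=w / w.sum()))
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        tau *= 1.0 - rho                # evaporation (coefficient p)
        for length, tour in tours:      # deposit proportional to Q/length
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a, b] += Q / length
                tau[b, a] += Q / length
    return best_len, best_tour

# Eight cities on a unit circle: the optimal tour walks the circle
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
pts = np.column_stack([np.cos(angles), np.sin(angles)])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
best_len, best_tour = ant_colony_tsp(dist)
```

Raising Q or lowering the evaporation coefficient strengthens reinforcement of early tours and speeds convergence at the risk of premature stagnation, which is exactly the trade-off the experiments above probe.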
Title: "OPTIMIZATION OF SWARM ROBOTICS ALGORITHMS"
Pub Date: 2022-10-16 | DOI: 10.15588/1607-3274-2022-3-9
V. Moskalenko, A. Moskalenko, A. Korobov, M. O. Zaretsky
Context. The problem of the vulnerability of image classification algorithms to destructive perturbations has not yet been definitively resolved and is highly relevant for safety-critical applications. The object of research is therefore the process of training and inference for an image classifier functioning under the influence of destructive perturbations. The subjects of the research are the model architecture and training algorithm of an image classifier that provide resilience to adversarial attacks, fault injection attacks, and concept drift. Objective. The stated research goal is to develop an effective model architecture and training algorithm that provide resilience to adversarial attacks, fault injections, and concept drift. Method. A new training algorithm is proposed that combines self-knowledge distillation, information measure maximization, class distribution compactness and interclass gap maximization, data compression based on discretization of the feature representation, and semi-supervised learning based on consistency regularization. Results. The model architecture and training algorithm of the image classifier were developed. The obtained classifier was tested on the Cifar10 dataset to evaluate its resilience over an interval of 200 mini-batches, with training and test mini-batch sizes of 128 examples, under the following perturbations: adversarial black-box L∞-attacks with perturbation levels of 1, 3, 5, and 10; inversion of one randomly selected bit in a tensor for 10%, 30%, 50%, and 60% of randomly selected tensors; addition of one new class; and real concept drift between a pair of classes. The effect of the feature space dimensionality on the value of the information criterion of model performance without perturbations and on the value of the integral resilience metric during exposure to perturbations is considered. Conclusions.
The proposed model architecture and learning algorithm provide absorption of part of the disturbing influence, graceful degradation due to hierarchical classes and adaptive computation, and fast adaptation on a limited amount of labeled data. It is shown that adaptive computation saves up to 40% of resources due to early decision-making in the lower sections of the model, but perturbing influence leads to slowing down, which can be considered as graceful degradation. A multi-section structure trained using knowledge self-distillation principles has been shown to provide more than 5% improvement in the value of the integral mectric of resilience compared to an architecture where the decision is made on the last layer of the model. It is observed that the dimensionality of the feature space noticeably affects the resilience to adversarial attacks and can be chosen as a tradeoff between resilience to perturbations and efficiency without perturbations.
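The combination of self-knowledge distillation across exits and consistency regularization on unlabeled data can be sketched as a composite loss. The NumPy illustration below is hypothetical: the particular loss terms, temperature, confidence threshold, and weights are assumptions, not the authors' exact formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float((p * np.log(p / q)).sum(axis=-1).mean())

def composite_loss(exit_logits, labels, weak_logits, strong_logits,
                   T=2.0, lam_sd=0.5, lam_cons=1.0, threshold=0.9):
    """Illustrative composite objective for a multi-exit classifier:
    (1) cross-entropy at every exit on labeled data,
    (2) self-knowledge distillation: earlier exits mimic the deepest exit,
    (3) consistency regularization: the strongly augmented view must match
        confident pseudo-labels from the weakly augmented view."""
    n_cls = exit_logits[0].shape[-1]
    onehot = np.eye(n_cls)[labels]

    # (1) supervised cross-entropy averaged over all exits
    ce = -np.mean([(onehot * np.log(np.clip(softmax(z), 1e-12, 1.0))).sum(-1).mean()
                   for z in exit_logits])

    # (2) distill earlier exits toward the deepest exit at temperature T
    teacher = softmax(exit_logits[-1], T)
    sd = np.mean([kl_div(teacher, softmax(z, T)) for z in exit_logits[:-1]])

    # (3) consistency on unlabeled data, masked by teacher confidence
    weak_p = softmax(weak_logits)
    mask = weak_p.max(-1) >= threshold
    pseudo = np.eye(n_cls)[weak_p.argmax(-1)]
    strong_logp = np.log(np.clip(softmax(strong_logits), 1e-12, 1.0))
    cons = -((pseudo * strong_logp).sum(-1) * mask).mean() if mask.any() else 0.0

    return float(ce + lam_sd * T * T * sd + lam_cons * cons)
```

Each term is non-negative, so the loss decreases only when all three objectives improve together.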
Title: IMAGE CLASSIFIER RESILIENT TO ADVERSARIAL ATTACKS, FAULT INJECTIONS AND CONCEPT DRIFT – MODEL ARCHITECTURE AND TRAINING ALGORITHM
Pub Date: 2022-10-16 | DOI: 10.15588/1607-3274-2022-3-5
A. Shved, Yevhen Davydenko
Context. Unfortunately, the assumptions most commonly used in parametric statistics, such as normality, linearity, and independence, are not always fulfilled in practice. The main reason for this is the appearance in data samples of observations that differ from the bulk of the data, as a result of which the sample becomes heterogeneous. Applying generally accepted estimation procedures, such as the sample mean, under these conditions increases the bias and decreases the effectiveness of the obtained estimates. This, in turn, raises the problem of finding possible approaches to processing data sets that include outliers, especially small samples. The object of the study is the process of detecting and excluding anomalous objects from heterogeneous data sets. Objective. The goal of the work is to develop a procedure for anomaly detection in heterogeneous data sets and to justify the use of a number of trimmed-mean robust estimators as a statistical measure of the location parameter of distorted parametric distribution models. Method. The problems of analyzing (processing) heterogeneous data containing outliers and sharply distinguished, suspicious observations are considered. The possibilities of using robust estimation methods for processing heterogeneous data are analyzed. A procedure is proposed for identifying and removing outliers caused by measurement errors, hidden equipment defects, experimental conditions, etc. The proposed approach is based on symmetric and asymmetric truncation of the ranked set obtained from the initial sample of measurement data, drawing on the methods of robust statistics. To choose the value of the truncation coefficient reasonably, adaptive robust procedures are proposed. Observations that fall into the zones of the smallest and largest order statistics are considered outliers. Results.
The proposed approach allows, in contrast to traditional criteria for identifying outlying observations such as the Smirnov (Grubbs) criterion and the Dixon criterion, splitting the analyzed data set into a homogeneous component and a set of outlying observations, assuming that the share of outliers in the total set of analyzed data is unknown. Conclusions. The article proposes using methods of robust statistics to form the presumed zones containing homogeneous and outlying observations in the ranked set built from the initial sample of the analyzed data. It is proposed to use a complex of adaptive robust procedures to establish the expected truncation levels that form the zones of outlying observations in the regions of the smallest and largest order statistics of the ranked data set. The final truncation level of the ranked data set is refined on the basis of existing criteria that allow checking the boundary observations (minimum and maximum) for outliers.
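The truncation of a ranked sample described above can be sketched in a few lines. An illustrative Python version with a fixed symmetric trim level; the adaptive choice of the truncation coefficient used in the paper is not reproduced here.

```python
import numpy as np

def trimmed_mean_outliers(sample, trim_frac=0.1):
    """Rank the sample and truncate trim_frac of the observations from
    each tail. The mean of the retained core is a robust location
    estimate; the truncated observations form the candidate-outlier
    zones at the smallest and largest order statistics."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    k = int(np.floor(trim_frac * n))  # observations trimmed per tail
    core = x[k:n - k] if k else x
    outlier_zone = np.concatenate([x[:k], x[n - k:]]) if k else np.array([])
    return core.mean(), core, outlier_zone
```

For the sample 1..9 plus a contaminating value 100, a 10% symmetric trim removes one observation from each tail, so the estimate is computed from 2..9 and the value 100 lands in the outlier zone.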
Title: OUTLIER DETECTION TECHNIQUE FOR HETEROGENEOUS DATA USING TRIMMED-MEAN ROBUST ESTIMATORS
Pub Date: 2022-10-01 | DOI: 10.15588/1607-3274-2022-3-3
V. Gorev, A. Gusev, V. Korniienko
Context. We investigate the Kolmogorov-Wiener filter weight function for the prediction of continuous stationary telecommunication traffic in the GFSD (Gaussian fractional sum-difference) model. Objective. The aim of the work is to obtain an approximate solution for the corresponding weight function and to illustrate the convergence of the truncated polynomial expansion method used in this paper. Method. The truncated polynomial expansion method is used to obtain an approximate solution for the Kolmogorov-Wiener weight function under consideration. The method is applied on the basis of Chebyshev polynomials of the first kind, orthogonal on the time interval on which the filter input data are given. It is expected that results based on other polynomial sets will be similar to those obtained here. Results. The weight function is investigated in approximations of up to eighteen polynomials. It is shown that approximations with rather large numbers of polynomials lead to good agreement between the left-hand and right-hand sides of the Wiener-Hopf integral equation. The quality of the agreement is illustrated by calculating the corresponding MAPE errors. Conclusions. The paper is devoted to the theoretical construction of the Kolmogorov-Wiener filter for the prediction of continuous stationary telecommunication traffic in the GFSD model. The traffic correlation function in the framework of the GFSD model is positive definite, which guarantees the convergence of the truncated polynomial expansion method. The corresponding weight function is obtained in approximations of up to eighteen polynomials. The convergence of the method is illustrated by calculating the MAPE errors of the misalignment of the left-hand and right-hand sides of the Wiener-Hopf integral equation under consideration.
The results of the paper may be applied to practical traffic prediction in telecommunication systems with data packet transfer.
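The truncated polynomial expansion reduces the Wiener-Hopf equation to a small linear system. Below is an illustrative Galerkin solver in NumPy using Chebyshev polynomials of the first kind mapped onto the data interval; a smooth Gaussian correlation function stands in for the GFSD one, whose explicit form is not reproduced here, and the residual MAPE between the two sides of the equation is computed as in the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def wiener_weight_chebyshev(R, T, z, n_poly, n_quad=100):
    """Truncated polynomial (Galerkin) solution of the Wiener-Hopf equation
        int_0^T h(s) R(t - s) ds = R(t + z),   0 <= t <= T,
    with the weight function h expanded in Chebyshev polynomials of the
    first kind mapped onto [0, T]. R is a stationary correlation function,
    z the prediction horizon. Returns h at the quadrature nodes, the
    nodes, and the MAPE between the two sides of the equation."""
    # Gauss-Legendre quadrature nodes/weights mapped to [0, T]
    xg, wg = np.polynomial.legendre.leggauss(n_quad)
    t = 0.5 * T * (xg + 1.0)
    w = 0.5 * T * wg
    # Chebyshev basis T_0..T_{n_poly-1} evaluated at the mapped nodes
    u = 2.0 * t / T - 1.0
    P = np.stack([C.chebval(u, np.eye(n_poly)[k]) for k in range(n_poly)])
    K = R(t[:, None] - t[None, :])      # kernel matrix R(t - s)
    A = (P * w) @ K @ (P * w).T         # Galerkin matrix <P_m, K P_k>
    b = (P * w) @ R(t + z)              # projected right-hand side
    g = np.linalg.solve(A, b)           # expansion coefficients
    h = g @ P                           # weight function at the nodes
    lhs = (K * w) @ h                   # int_0^T h(s) R(t - s) ds
    rhs = R(t + z)
    mape = float(np.mean(np.abs((lhs - rhs) / rhs))) * 100.0
    return h, t, mape
```

With a smooth positive-definite correlation function, a handful of polynomials already makes the two sides of the equation agree closely, mirroring the convergence behavior reported above.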
Title: KOLMOGOROV-WIENER FILTER FOR CONTINUOUS TRAFFIC PREDICTION IN THE GFSD MODEL
Pub Date: 2022-10-01 | DOI: 10.15588/1607-3274-2022-3-2
V. Galchenko, M. D. Koshevoy, R. Trembovetskaya
Context. The article is devoted to the creation of multifactorial experimental plans based on quasi-random recursive Roberts R-sequences. The object of the research is the process of creating computer-aided experimental design plans. The aim of the article is to create multifactorial, namely six- and seven-factor, uniform experimental plans with low discrepancies, to study their projection properties, and to demonstrate their use on the example of surrogate modeling in eddy current structuroscopy. Method. An iterative method of evenly filling the unit hypercube with reference points was used for constructing multidimensional experimental plans. It provides acceptable indicators of homogeneity and is realized on the basis of quasi-random nonparametric additive recursive Roberts R-sequences using irrational numbers, which, in turn, are obtained from the generalized Fibonacci sequence. The criterion of plan quality is the assessment of homogeneity in terms of discrepancies that are invariant with respect to coordinate rotation and to relabeling and reordering of factors, and that quantitatively characterize the deviation of the generated distribution from the ideal uniform one. Results. Six- and seven-factor uniform computer experimental plans have been created for cataloging; they are characterized by low discrepancies and sufficiently high-quality projection properties. The tendency, previously proved in the authors' research, of these plan characteristics being preserved in multidimensional factor spaces as the number of plan points increases has been confirmed. The quality of the created experimental plans is evaluated both by visual analysis of the scatter matrix of all two-dimensional projections and by quantitative indicators of the heterogeneity of the set of vectors that form the plan, namely the centered and cyclic discrepancies.
The example of the initial stage of creating a surrogate model for the problem of identifying profiles of electrophysical parameters in eddy current structuroscopy shows certain features of applying the created plans, in particular the transition from a plan on the unit hypercube to a plan in the real factor space in the form of a hyperparallelepiped, which does not significantly affect the homogeneity of the distribution of points. Conclusions. For the first time, the problem of creating six- and seven-factor uniform experimental plans with low values of the centered and cyclic discrepancies based on Roberts R-sequences was solved. The projection properties of the created experimental plans for different numbers of points were investigated. The method of constructing multidimensional computer experimental plans taking into account the peculiarities of eddy current structuroscopy was improved. The use of six-dimensional experimental plans was demonstrated on the example of surrogate modeling in eddy current structuroscopy.
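The additive recursive R_d construction can be sketched directly. In Roberts' published construction, the generator constant phi_d is the unique positive root of x^(d+1) = x + 1, a generalization of the golden ratio associated with the generalized Fibonacci sequence; the plan sizes and post-processing used in the article are not reproduced in this illustrative version.

```python
import numpy as np

def roberts_sequence(n_points, dim, seed=0.5):
    """Quasi-random additive recursive R_d sequence: point n is
    frac(seed + n * alpha), where alpha_j = phi_d^-(j+1) and phi_d is
    the unique positive root of x^(dim+1) = x + 1. The points fill the
    unit hypercube with low discrepancy."""
    # solve x^(dim+1) = x + 1 by the fixed-point iteration x <- (1+x)^(1/(dim+1))
    phi = 2.0
    for _ in range(50):
        phi = (1.0 + phi) ** (1.0 / (dim + 1))
    alpha = phi ** -(np.arange(1, dim + 1, dtype=float))
    n = np.arange(1, n_points + 1)[:, None]
    return (seed + n * alpha) % 1.0
```

For dim = 1 the construction recovers the golden-ratio sequence, since phi_1 is the golden ratio and alpha = 1/phi_1 ≈ 0.618.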
Title: UNIFORM PLANS OF MULTIFACTORIAL EXPERIMENTS ON QUASI-RANDOM ROBERTS R-SEQUENCES FOR SURROGATE MODELING IN EDDY CURRENT STRUCTUROSCOPY
Pub Date: 2022-10-01 | DOI: 10.15588/1607-3274-2022-3-1
R. Kharchenko, A. V. Kochetkov, V. Mikhaylenko
Context. DC voltage converters (DCV) are part of modern power supply systems (PSS) ensuring the operation of electronic and radio devices, telecommunication systems, and communications, and to a large extent they determine power consumption, reliability, time of readiness for operation, weight, size, and cost indicators. Even though a large number of different software packages are used in engineering practice for the study and design of radio engineering devices, such computer-aided design (CAD) systems and virtual computer simulation of electronic circuits have limitations that do not allow one to quickly carry out the entire complex of DCV studies required for analyzing electrical processes in various operating modes. Objective. The goal is to select the most suitable methods and algorithms for developing the software needed to study and analyze electrical processes for selected energy parameters of a modular-structure DCV within a separate power channel (PWC). Method. The paper proposes a method that consists in using mathematical models describing electrical processes in DC voltage converters and creating, on the basis of the developed calculation algorithms, specialized software for the computer-automated study of electrical processes in a modular-structure DCV. Results. The paper discusses the main methods of automated research of radio engineering devices that can be used to analyze the electrical processes of pulsed DC voltage converters of modular structure.
It is shown that the most suitable method is based on the use of mathematical models describing electrical processes in DC voltage converters of this type. On the basis of the mathematical models presented in the second section of the work, algorithms and specialized software have been developed that allow them to be widely used in the automated research and design of modular-structured DC voltage converters.
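As an illustration of the kind of computation such specialized software automates, the electrical processes of a single buck-type power channel can be integrated from its state-space averaged model. This is a hypothetical minimal sketch: the paper's actual multi-channel converter models and parameter values are not given here.

```python
def simulate_buck(Vin=24.0, D=0.5, L=1e-4, Cap=1e-4, Rload=10.0,
                  dt=1e-7, t_end=2e-2):
    """Forward-Euler integration of the state-space averaged model of a
    buck DC-DC converter (duty cycle D):
        L   di/dt = D*Vin - v
        Cap dv/dt = i - v/Rload
    Returns the inductor current and output voltage at t_end."""
    steps = int(t_end / dt)
    i, v = 0.0, 0.0  # start from a de-energized converter
    for _ in range(steps):
        di = (D * Vin - v) / L
        dv = (i - v / Rload) / Cap
        i += di * dt
        v += dv * dt
    return i, v
```

After the transient decays, the model settles at the expected averaged steady state v = D*Vin and i = v/Rload, the kind of operating-mode quantity such analysis software reports.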
Title: ANALYSIS OF METHODS FOR AUTOMATED RESEARCH OF DC VOLTAGE CONVERTERS OF MODULAR STRUCTURE