The subject of study in this article is Field Programmable Gate Array (FPGA) technologies, methods, and tools for prototyping hardware accelerators of Artificial Intelligence (AI) and providing them as a service. The goal is to reduce the effort of creating and modifying FPGA implementations of AI projects and to provide such solutions as a service. Tasks: to analyze the possibilities of heterogeneous computing for the implementation of AI projects; to analyze advanced FPGA technologies and accelerator cards that allow the organization of a service; to analyze the languages, frameworks, and integrated environments for creating AI projects for FPGA implementation; to propose a technique for modifiable FPGA project prototyping that ensures a long period of compatibility with integrated environments and target devices; to propose a technique for prototyping high-performance FPGA services to improve the efficiency of FPGA-based AI projects; to propose a sequence of optimization of neural networks for FPGA implementation; and to provide an example of the practical implementation of the research results. According to these tasks, the following results were obtained. An analysis of the largest companies and vendors of FPGA technology is performed. Existing heterogeneous technologies and potential non-electronic media for AI computations are discussed. FPGA accelerator cards with a large amount of High Bandwidth Memory (HBM) in the same chip package for the implementation of AI projects are analyzed and compared. Languages, frameworks, and technologies, as well as the capabilities of libraries and integrated environments for prototyping FPGA projects for AI applications, are analyzed in detail. A sequence for prototyping FPGA projects that remain stable under changes in the environment is proposed. A sequence for prototyping highly efficient pipelined projects for data processing is proposed. The steps of optimizing neural networks for FPGA implementation of AI applications are provided. An example of the practical use of the research results, including the proposed sequences, is provided. Conclusions. One of the main contributions of this research is the proposed method for creating FPGA-based implementations of AI projects in the form of services. The proposed sequence of neural network optimization for FPGA reduces the complexity of the initial program model by more than five times for hardware implementation, depending on the required accuracy. The described solutions allow the construction of completely scalable and modifiable FPGA implementations of AI projects that can be provided as a service.
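The optimization sequence itself is not reproduced in the abstract, but one typical step of reducing model complexity for hardware, fixed-point weight quantization, can be sketched as follows. This is a minimal illustration with invented weight values, not the paper's exact pipeline:

```python
# Illustrative sketch (not the paper's optimization sequence): uniform
# fixed-point quantization of neural-network weights, a common step when
# shrinking a floating-point model for FPGA implementation.

def quantize(weights, bits):
    """Map float weights to signed fixed-point integers using `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.91, -0.42, 0.07, -0.88, 0.33]   # invented example weights
q, scale = quantize(weights, bits=6)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Storing 6-bit integers instead of 32-bit floats gives more than a fivefold
# reduction in weight storage, at the cost of a bounded quantization error.
```

Whether such a reduction preserves accuracy depends on the model, which is why the abstract ties the achievable compression to the required accuracy.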
Artem Perepelitsyn, "Method of creation of FPGA based implementation of artificial intelligence as a service," Radioelectronic and Computer Systems, 2023-09-29. DOI: 10.32620/reks.2023.3.03.
Melinda Melinda, Filbert H. Juwono, I Ketut Agung Enriko, Maulisa Oktiana, Siti Mulyani, Khairun Saddami
The article’s subject matter is the classification of Electroencephalography (EEG) signals in Autism Spectrum Disorder (ASD) sufferers. The goal is to develop a classification model using Machine Learning (ML) algorithms that are often implemented in Brain-Computer Interface (BCI) technology. The tasks to be solved are as follows: pre-processing the EEG dataset signal to separate the source signal from the noise/artifact signal and produce an observation signal free of noise/artifacts; obtaining an effective feature comparison to be used as attributes at the classification stage; and developing a more effective classification method for detecting people with ASD through EEG signals. The methods used are as follows. The Continuous Wavelet Transform (CWT), one of the wavelet techniques, decomposes signals in the time-frequency domain. CWT began to be used on EEG signals because it can describe them in great detail in the time-frequency domain. EEG signals are classified in two scenarios: classification of CWT coefficients and classification of statistical features (mean, standard deviation, skewness, and kurtosis) of the CWT. The classification in this research uses ML, which is currently highly developed in signal processing. One of the best ML methods is the Support Vector Machine (SVM). SVM is an effective supervised learning method that separates data into different classes by finding the hyperplane with the largest margin among the observed data. The following results were obtained: the application of CWT and SVM produced the best classification based on CWT coefficients, with an accuracy of 95%, higher than the CWT statistical-feature-based classification, which achieved an accuracy of 65%. Conclusions. The scientific contributions of the obtained results are as follows: 1) EEG signal processing is performed in ASD children using feature extraction with CWT and classification with SVM; 2) the combination of these signal classification methods can improve system performance in ASD EEG signal classification; 3) the implementation of this research can later assist in detecting ASD EEG signals based on brain wave characteristics.
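The statistical-feature scenario can be sketched as follows: for one row of CWT coefficients, compute the four features (mean, standard deviation, skewness, kurtosis) that are fed to the SVM. The CWT and SVM stages are omitted, and the coefficient values are invented for illustration:

```python
import statistics

def cwt_row_features(coeffs):
    """Mean, standard deviation, skewness, and kurtosis of one CWT row."""
    mean = statistics.fmean(coeffs)
    std = statistics.pstdev(coeffs)
    n = len(coeffs)
    skew = sum(((x - mean) / std) ** 3 for x in coeffs) / n
    kurt = sum(((x - mean) / std) ** 4 for x in coeffs) / n
    return mean, std, skew, kurt

row = [-2.0, -1.0, 0.0, 1.0, 2.0]       # invented coefficient values
mean, std, skew, kurt = cwt_row_features(row)
# A symmetric row yields zero skewness, as expected.
```

Stacking these four values per wavelet scale yields the compact feature vector that the second scenario classifies, compared with using the raw coefficient matrix in the first scenario.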
Melinda Melinda, Filbert H. Juwono, I Ketut Agung Enriko, Maulisa Oktiana, Siti Mulyani, Khairun Saddami, "Application of continuous wavelet transform and support vector machine for autism spectrum disorder electroencephalography signal classification," Radioelectronic and Computer Systems, 2023-09-29. DOI: 10.32620/reks.2023.3.07.
File carving techniques are important in the field of digital forensics. At the same time, the rapid growth in the amount and types of data requires the development of file carving methods in terms of capabilities, accuracy, and computational efficiency. However, most methods are developed to solve specific tasks and are based on a certain set of assumptions and a priori knowledge about the files to be recovered. There is a lack of research that systematizes methods and structures approaches to identify gaps and determine promising directions for development, considering the latest advances in information technology and artificial intelligence. The subject matter of this article is the structure, factors, efficiency criteria, methods, and tools of file carving, as well as the current state and development trends of file carving methods. The goal of this study is to systematize knowledge about advanced file carving methods and identify promising directions for their development. The tasks to be solved are as follows: to identify the main stages of file carving and analyze approaches to their implementation; to build an ontological scheme of file carving; and to identify promising directions for the development of carving methods. The methods used were literature review, systematization, and summarization. The obtained results are as follows. An ontological scheme for the file carving concept is constructed. The scheme includes the principles, properties, phases, techniques, evaluation criteria, tools used, and factors influencing file carving. The features, limitations, and fields of application of the data recovery methods are provided. It was established that the most widespread approach to file reconstruction is still a detailed manual analysis of the internal structure of files and/or their contents, identifying specific patterns that allow reassembling the sequence of data fragments in the correct order. However, most of these methods do not provide one-hundred-percent guaranteed results. This article analyzes the current state and prospects of using artificial intelligence methods in the field of digital forensics, particularly for identifying data blocks, clustering, and reconstructing files, as well as restoring the contents of media files with damaged or lost headers. The necessity of having a priori information about the file structure or content for successfully carving fragmented data is determined. Conclusions. The scientific novelty of the obtained results is as follows: for the first time, advanced file carving methods are systematized and analyzed by development directions and the prospects of using artificial intelligence for identifying data blocks, clustering, and file content restoration; for the first time, an ontological scheme of file carving is constructed, which can be used as a roadmap for developing new advanced systems in the digital forensics field.
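The classic pattern-matching approach described above can be illustrated with a minimal header/footer carver. The JPEG start and end markers are real magic numbers; the byte stream and the function itself are invented for this sketch and ignore fragmentation, which is exactly what makes real carving hard:

```python
# Simplified header/footer-based carving: scan a raw byte image for JPEG
# start-of-image (FF D8 FF) and end-of-image (FF D9) markers and report
# candidate (start, end) byte spans. Fragmented files are not handled.
JPEG_SOI = b"\xff\xd8\xff"
JPEG_EOI = b"\xff\xd9"

def carve_jpeg(image):
    """Return (start, end) byte offsets of candidate contiguous JPEG files."""
    found, pos = [], 0
    while (start := image.find(JPEG_SOI, pos)) != -1:
        end = image.find(JPEG_EOI, start + len(JPEG_SOI))
        if end == -1:
            break                        # header without footer: truncated file
        found.append((start, end + len(JPEG_EOI)))
        pos = end + len(JPEG_EOI)
    return found

# An invented "disk image": junk bytes around two embedded JPEG-like spans.
disk = (b"junk" + b"\xff\xd8\xff\xe0data\xff\xd9"
        + b"more" + b"\xff\xd8\xff\xe1x\xff\xd9")
spans = carve_jpeg(disk)
```

This contiguous-file assumption is one of the a priori assumptions the survey points out; fragmented or header-damaged files require the structure- and content-aware methods it reviews.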
Maksym Boiko, Viacheslav Moskalenko, Oksana Shovkoplias, "Advanced file carving: ontology, models and methods," Radioelectronic and Computer Systems, 2023-09-29. DOI: 10.32620/reks.2023.3.16.
Alina Yanko, Viktor Krasnobayev, Anatolii Martynenko
The concept of increasing the fault tolerance of a computer system (CS) by using its existing natural redundancy, which depends on the number system used, is considered. The subject of this article is the methods and means of increasing the fault tolerance of CS and their components based on the use of a non-positional number system in residual classes. It is shown that the use of the system of residual classes (SRC) as a number system ensures the fault-tolerant functioning of a real-time CS. This study considers a fault-tolerant CS operating in the SRC. The aim of this research is to show the influence of the non-positional number system in the SRC on the possibility of organizing the fault-tolerant functioning of a computer system. The object of this research is the process of fault-tolerant functioning of the CS in the SRC. This article provides an example of the operation of a fault-tolerant CS in the SRC given by a set of specific bases. The fault tolerance of the CS in the SRC is ensured by exploiting the basic properties of the SRC through the method of active fault tolerance, using the procedure of gradual degradation. In the example given in this article, the required level of fault tolerance of the CS in the SRC is achieved by reducing the accuracy of the calculations. This article considers two levels of degradation. Variants of algorithms for operating a fault-tolerant CS in the SRC in the replacement and gradual degradation modes are presented. Methods of system analysis, number theory, the theory of computing processes and systems, and coding theory in the SRC formed the basis of this research. The results of the analysis of the specific example of the functioning of a CS in the SRC, specified by four information bases and one control base, showed the effectiveness of using non-positional code structures to ensure fault-tolerant operation. Conclusions. This article discusses the concept of increasing fault tolerance based on the primary redundancy already contained in the CS, due to the use of the basic properties of the non-positional number system in residual classes.
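The residue-number idea behind the SRC, including why a value survives "gradual degradation" when one base is dropped, can be sketched as follows. The bases and the value are invented for the sketch, and this is not the authors' specific construction:

```python
# Toy residue number system: a value is stored as residues modulo pairwise
# coprime bases and reconstructed via the Chinese Remainder Theorem (CRT).
from math import prod

BASES = [3, 5, 7, 11]                   # pairwise coprime moduli

def to_residues(x, bases=BASES):
    return [x % m for m in bases]

def from_residues(residues, bases=BASES):
    """CRT reconstruction from residues."""
    M = prod(bases)
    x = 0
    for r, m in zip(residues, bases):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # modular inverse exists: coprime bases
    return x % M

x = 97
r = to_residues(x)
# Gradual degradation: drop the last base (e.g., after a channel fault).
# The value is still recoverable as long as it stays below the reduced
# range 3 * 5 * 7 = 105, i.e., fault tolerance is traded for accuracy/range.
degraded = from_residues(r[:3], BASES[:3])
```

Because each residue channel is independent, discarding a faulty channel only shrinks the dynamic range, which mirrors the article's point that the degradation levels reduce calculation accuracy rather than halting the system.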
Alina Yanko, Viktor Krasnobayev, Anatolii Martynenko, "Influence of the number system in residual classes on the fault tolerance of the computer system," Radioelectronic and Computer Systems, 2023-09-29. DOI: 10.32620/reks.2023.3.13.
This study investigated the multi-criteria task of optimizing the operating modes of cathodic protection stations (CPS), considering monitoring data, geological conditions at the pipeline installation site, climatic or seasonal changes, and other factors. The relevance of this research is associated with a comprehensive solution to the problem of increasing the durability and reliability of trunk pipelines to reduce accidents at their facilities by ensuring the efficiency of electrochemical protection (EChP) systems. The problems of existing EChP systems are analyzed: the elimination of anode zones ("lack of protection") through cathodic polarization is carried out without operational consideration of environmental conditions, as a rule, with a margin in the protective potential, which often leads to "overprotection", resulting in increased power consumption, gas formation on the metal surface, and detachment and wear of pipeline insulation. The aim of this research is to create a method for the optimal regulation of the operating modes of main pipelines and an adaptive electrochemical protection system that provides control and parameter management of cathodic protection stations, considering changes in external conditions on individual linear sections of main pipelines. Tasks: to develop an adjustment method for finding the effect of the CPS on the potentials at control points along the pipeline route; to develop a multicriteria optimization model for regulating the operating modes of the CPS; and to provide an example of testing the method of optimal regulation on the objects of the linear part of an existing main gas pipeline. The following results were obtained. A method is proposed for determining the effect of CPS operating modes on the potentials at control points in the mode of interrupting the protection current of the other stations. An optimization model was formed according to the criterion of uniformity of the distribution of the protective "pipe-ground" potential along the pipeline route and the criterion of the minimum total protective current of all CPSs on a given section of the main pipeline. Conclusions. The scientific novelty of the obtained results is associated with the development of an original optimization method that allows a scientifically grounded determination of the CPS operating modes to ensure the protection of the main pipeline both in time and along its length, with reduced operating costs and adaptability to changes in climatic, seasonal, and geological conditions at the pipeline installation site. The effectiveness of the proposed approach is illustrated by regulating the parameters of the CPS based on monitoring data from a section of the main gas pipeline of the oil and gas complex of the Republic of Kazakhstan.
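The two criteria named above can be illustrated with a deliberately simplified model in which control-point potentials respond linearly to station currents. All coefficients, the target potential, the weighted-sum scalarization, and the exhaustive search are invented for this sketch and are not the authors' optimization model:

```python
# Toy two-criteria CPS tuning: score candidate station currents by
# (a) deviation of control-point potentials from a uniform target and
# (b) total protective current. All numbers are invented.
TARGET = -0.95                          # target "pipe-ground" potential, V
A = [[-0.08, -0.02],                    # influence of each station's current
     [-0.05, -0.05],                    # on each control point, V per A
     [-0.02, -0.08]]
NATURAL = [-0.60, -0.62, -0.58]         # potentials with stations off, V

def potentials(currents):
    return [n + sum(a, )
            for row, n in zip(A, NATURAL)] if False else [
            n + sum(a * i for a, i in zip(row, currents))
            for row, n in zip(A, NATURAL)]

def score(currents, w=0.05):
    u = potentials(currents)
    uniformity = sum((p - TARGET) ** 2 for p in u)     # criterion 1
    total_current = sum(currents)                      # criterion 2
    return uniformity + w * total_current              # weighted-sum scalarization

# Exhaustive search over integer current settings (A) for two stations.
best = min(((i1, i2) for i1 in range(11) for i2 in range(11)), key=score)
```

A real model would add constraints (allowed potential window, station limits) and be driven by monitored rather than assumed influence coefficients, which is the role of the interruption-mode measurement method the abstract describes.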
Oleksandr Prokhorov, Valeriy Prokhorov, Alisher Khussanov, Zhakhongir Khussanov, Botagoz Kaldybayeva, Dilfuza Turdybekova, "Optimization of the cathodic protection system for the main pipelines," Radioelectronic and Computer Systems, 2023-09-29. DOI: 10.32620/reks.2023.3.15.
Today, there is a contradiction between the rapid increase in the complexity and size of modern software, the increasing responsibility for the performance of its functions, the increasing requirements of customers and users for the quality and efficiency of software use, and the imperfection of the models, methods, and tools for predicting software quality at the early stages of the life cycle. Therefore, the task of predicting the software quality level based on requirements is relevant. The aim of this study is to solve this task by developing an information technology for predicting software quality levels based on requirements. The proposed information technology analyzes quality attributes in the requirements, reflects the dependence (equations) of quality characteristics on attributes, forms a quantitative assessment of the quality characteristics, reflects the dependence (equation) of quality on its characteristics, forms a quantitative assessment of quality, and performs quality level prediction. It provides all the listed services simultaneously, and the model, methods, and tools underlying it belong to common methodological approaches and are integrated. The developed system for predicting the software quality level based on requirements provides the user with predicted estimates of eight software quality characteristics, a geometric interpretation of the quality characteristics' values, a comprehensive indicator of the predicted software quality, and a conclusion about the future software quality level. On this basis, it is possible to compare sets of requirements for software and make a reasoned choice of a set of requirements for further implementation. The information technology and the system for predicting the software quality level based on requirements developed in this paper make it possible to compare sets of software requirements, to select in a justified way the requirements for the further implementation of quality software (as experiments have shown, only one of the four proposed sets qualifies), and to reject or revise unsuccessful sets of requirements that cannot be used to develop quality software.
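The aggregation step, from eight predicted characteristic scores to one comprehensive indicator used to rank requirement sets, can be sketched as follows. The characteristic names follow ISO/IEC 25010 as a plausible assumption; the scores and the equal weighting are invented:

```python
# Hedged sketch: fold eight predicted quality-characteristic scores into a
# single comprehensive indicator and rank requirement sets by it.
CHARACTERISTICS = ["functional suitability", "performance efficiency",
                   "compatibility", "usability", "reliability",
                   "security", "maintainability", "portability"]

def comprehensive_indicator(scores, weights=None):
    """Weighted sum of characteristic scores; equal weights by default."""
    weights = weights or [1 / len(scores)] * len(scores)
    return sum(s * w for s, w in zip(scores, weights))

req_sets = {                            # invented predicted scores in [0, 1]
    "set A": [0.9, 0.8, 0.7, 0.9, 0.8, 0.6, 0.7, 0.8],
    "set B": [0.5, 0.6, 0.4, 0.7, 0.5, 0.6, 0.5, 0.4],
}
best = max(req_sets, key=lambda k: comprehensive_indicator(req_sets[k]))
```

In the actual system, the per-characteristic scores come from the requirement-attribute equations rather than being supplied directly, and the comparison over four candidate sets identified a single acceptable one.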
"Information technology for prediction of software quality level", by Tetiana Hovorushchenko, Yurii Voichur, Dmytro Medzatyi. Radioelectronic and Computer Systems, 2023-09-29. DOI: 10.32620/reks.2023.3.19.
Ahmed Lahjouji El Idrissi, Ismail Ezzerrifi Amrani, Adil Ben-Hdech, Ahmad El Allaoui
This article is dedicated to the efficient resolution of the fixed charge transport problem (FCTP) with the goal of identifying optimal solutions within reduced timeframes. FCTP is a combinatorial and NP-complete problem known for its exponential time complexity relative to problem size. Metaheuristic methods, including genetic algorithms, represent effective techniques for obtaining high-quality FCTP solutions. Consequently, the integration of parallel algorithms emerges as a strategy for expediting problem-solving. The proposed approach, referred to as the parallel genetic algorithm (PGA), entails the application of a genetic algorithm across multiple parallel architectures to tackle the FCTP problem. The primary aim is to explore fresh solutions for the fixed charge transportation problem using genetic algorithms while concurrently optimizing the time required to achieve these solutions through parallelism. The FCTP problem is fundamentally a linear programming challenge, revolving around the determination of optimal shipment quantities from numerous source locations to multiple destinations with the overarching objective of minimizing overall transportation costs. This necessitates consideration of constraints tied to product availability at the sources and demand dynamics at the destinations. In this study, a pioneering approach to addressing the Fixed Charge Transportation Problem (FCTP) using parallel genetic algorithms (PGA) is unveiled. The research introduces two distinct parallel algorithms: The Master-Slave Approach (MS-GA) and the Coarse-Grained Approach (CG-GA). Additionally, investigation into the hybridization of these approaches has led to the development of the NMS-CG-GA approach. The numerical results reveal that our parallelism-based approaches significantly improve the performance of genetic algorithms. 
Specifically, the Master-Slave (MS-GA) approach demonstrates its advantages in solving smaller instances of the FCTP problem, while the Coarse-Grained (CG-GA) approach exhibits greater effectiveness for larger problem instances. The conclusion reached is that the novel hybrid parallel genetic algorithm approach (NMS-CG-GA) outperforms its predecessors, yielding outstanding results, particularly across diverse FCTP problem instances.
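The master-slave scheme can be sketched, under toy assumptions, as follows. The cost model, genetic operators, and parameter values are illustrative stand-ins, not the authors' implementation, and Python threads are used only to show the structure (real speedups would require process-level parallelism).

```python
# Hedged sketch of the master-slave idea behind MS-GA: the master runs
# selection, crossover, and mutation, while fitness evaluation (the costly
# step on real FCTP instances) is distributed to a worker pool.
from concurrent.futures import ThreadPoolExecutor
import random

random.seed(42)

N_ROUTES = 8        # toy encoding: one shipment quantity per route
FIXED_CHARGE = 5.0  # fixed cost paid whenever a route carries any flow

def fitness(chromosome):
    """Toy FCTP-style cost = variable cost + fixed charge per open route (minimize)."""
    variable = sum(chromosome)
    fixed = FIXED_CHARGE * sum(1 for g in chromosome if g > 0)
    return variable + fixed

def evolve(pop_size=20, generations=30, workers=4):
    pop = [[random.randint(0, 9) for _ in range(N_ROUTES)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            costs = list(pool.map(fitness, pop))      # slaves: parallel evaluation
            ranked = [c for _, c in sorted(zip(costs, pop))]
            parents = ranked[: pop_size // 2]         # master: elitist selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, N_ROUTES)   # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:             # mutation
                    child[random.randrange(N_ROUTES)] = random.randint(0, 9)
                children.append(child)
            pop = parents + children
    best = min(pop, key=fitness)
    return best, fitness(best)

best, cost = evolve()
print("best routing cost:", cost)
```

The coarse-grained variant (CG-GA) would instead run several such populations independently and exchange migrants periodically, which is what the hybrid NMS-CG-GA combines with the worker-pool evaluation shown here.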
"A novel approach and hybrid parallel algorithms for solving the fixed charge transportation problem". Radioelectronic and Computer Systems, 2023-09-29. DOI: 10.32620/reks.2023.3.02.
The subject matter of this article is the processes of modeling the function of attentiveness of users of critical applications on the basis of recognition of biometric parameters by elements of artificial intelligence. The goal is the development and software implementation of mechanisms for monitoring the work of employees of responsible professions, which, based on the analysis of information from a webcam online, monitor the presence of the employee's focus on the active zone of the critical application and the absence of unauthorized persons near the computer. The tasks to be solved are as follows: to determine a list of factors, the presence of which must be constantly checked to control the focus of the employee's attention on the active zone of the critical application and the absence of unauthorized persons near the computer; to choose the optimal technology for reading and primary processing of information from webcams online, for further use in solving the task; to develop mechanisms for monitoring certain factors, the presence of which must be constantly checked to control the presence of the employee's focus on the active zone of the critical application and the absence of unauthorized persons near the computer; and to programmatically implement the developed mechanisms using the object-oriented programming language Python. The methods used were artificial neural networks, 3D facial modeling, and landmark mapping. The following results were obtained. A list of factors has been identified, the presence of which must be constantly checked to monitor the presence of employee's attention in the active zone of critical use and the absence of unauthorized persons near the computer. On the basis of the analysis of modern technologies for reading and primary processing of information from online webcams, technologies implemented in the MediaPipe library were selected for further use in solving the problem. 
Mechanisms have been developed for monitoring these factors, whose presence must be constantly checked to confirm that the employee's attention remains in the active zone of the critical application and that no unauthorized persons are near the computer. The mechanisms were implemented in the object-oriented programming language Python using the MediaPipe library, and the experimental results proved the expediency of this approach for solving the problem. Conclusions. The scientific novelty of the obtained results is as follows: we formed a list of factors whose presence must be constantly checked to monitor the employee's attention in the active zone of a critical application and the absence of unauthorized persons near the computer, and we improved facial recognition technologies, which allows the problem of monitoring the attention of users of critical applications to be solved under non-ideal conditions.
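The monitoring mechanisms can be thought of as a per-frame decision layer on top of face detection. The sketch below covers only that layer: the face count and gaze flag would in practice come from MediaPipe FaceMesh landmarks, and the alert logic and factor list are illustrative assumptions, not the paper's exact mechanisms.

```python
# Hedged sketch of the per-frame monitoring logic. Inputs (number of detected
# faces, a coarse "looking at screen" flag, whether the critical application's
# active zone has focus) are assumed to be supplied by an upstream detector
# such as MediaPipe FaceMesh; thresholds and wording are illustrative.

ALERT_UNAUTHORIZED = "unauthorized person near the computer"
ALERT_ABSENT = "employee absent"
ALERT_DISTRACTED = "attention outside the active application zone"

def check_frame(n_faces, looking_at_screen, app_zone_active):
    """Return the list of alerts raised for one webcam frame."""
    alerts = []
    if n_faces == 0:
        alerts.append(ALERT_ABSENT)
    elif n_faces > 1:
        alerts.append(ALERT_UNAUTHORIZED)
    if n_faces >= 1 and (not looking_at_screen or not app_zone_active):
        alerts.append(ALERT_DISTRACTED)
    return alerts

print(check_frame(1, True, True))    # normal work: no alerts
print(check_frame(2, True, True))    # a second face is treated as unauthorized
print(check_frame(0, False, True))   # nobody in front of the camera
```

In a real deployment this function would be called on every processed frame, with alerts debounced over a short time window to tolerate detection noise.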
"Modeling the mindfulness people's function based on the recognition of biometric parameters by artificial intelligence elements", by Olena Vysotska, Anatolii Davydenko, Oleksandr Potenko. Radioelectronic and Computer Systems, 2023-09-29. DOI: 10.32620/reks.2023.3.11.
The subject of this study is methods for improving the efficiency of semantic coding of speech signals. The purpose of this study is to develop a method for improving the efficiency of semantic coding of speech signals. Coding efficiency refers to the reduction of the information transmission rate with a given probability of error-free recognition of semantic features of speech signals, which will significantly reduce the required source bandwidth, thereby increasing the communication channel bandwidth. To achieve this goal, it is necessary to solve the following scientific tasks: (1) to investigate a known method for improving the efficiency of semantic coding of speech signals based on mel-frequency cepstral coefficients; (2) to substantiate the effectiveness of using the adaptive empirical wavelet transform in the tasks of multiple-scale analysis and semantic coding of speech signals; (3) to develop a method of semantic coding of speech signals based on adaptive empirical wavelet transform with further application of Hilbert spectral analysis and optimal thresholding; and (4) to perform an objective quantitative assessment of the increase in the efficiency of the developed method of semantic coding of speech signals in contrast to the existing method. 
The following scientific results were obtained during the study: a method of semantic coding of speech signals based on empirical wavelet transform is developed for the first time, which differs from existing methods by constructing a set of adaptive bandpass Meyer wavelet filters with further application of Hilbert spectral analysis to find the instantaneous amplitudes and frequencies of the functions of internal empirical modes, which will allow the identification of semantic features of speech signals and increase the efficiency of their coding; for the first time, it is proposed to use the method of adaptive empirical wavelet transform in the tasks of multiple-scale analysis and semantic coding of speech signals, which will increase the efficiency of spectral analysis by decomposing the high-frequency speech oscillation into its low-frequency components, namely internal empirical modes; the method of semantic coding of speech signals based on mel-frequency cepstral coefficients was further developed, but using the basic principles of adaptive spectral analysis with the help of empirical wavelet transform, which increases the efficiency of this method. Conclusions: We developed a method for semantic coding of speech signals based on empirical wavelet transform, which reduces the encoding rate from 320 to 192 bps and the required bandwidth from 40 to 24 Hz with a probability of error-free recognition of approximately 0.96 (96%) and a signal-to-noise ratio of 48 dB, according to which its efficiency is increased by 1.6 times as compared to the existing method. We developed an algorithm for semantic coding of speech signals based on empirical wavelet transform and its software implementation in t
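The Hilbert-analysis step can be illustrated with a minimal sketch. This is not the authors' method: a fixed Butterworth band-pass stands in for the adaptive Meyer wavelet filter bank, and the test signal, band edges, and sample rate are assumed purely for the example.

```python
# Hedged sketch: extract one "mode" with a generic band-pass filter, then use
# the analytic signal (Hilbert transform) to recover its instantaneous
# amplitude and frequency, as in Hilbert spectral analysis. All numeric
# choices below are illustrative, not taken from the paper.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 8000                                   # sample rate, Hz (assumed)
t = np.arange(0, 0.5, 1 / FS)
# toy "speech" = two tones; we isolate the 600 Hz component
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 600 * t)

sos = butter(4, [500, 700], btype="bandpass", fs=FS, output="sos")
mode = sosfiltfilt(sos, x)                  # stand-in for one empirical mode

analytic = hilbert(mode)                    # analytic signal
inst_amp = np.abs(analytic)                 # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * FS / (2 * np.pi)   # instantaneous frequency, Hz

mid = slice(len(t) // 4, 3 * len(t) // 4)   # ignore filter edge effects
print("median inst. frequency:", round(float(np.median(inst_freq[mid])), 1))
print("median inst. amplitude:", round(float(np.median(inst_amp[mid])), 2))
```

For the 600 Hz component, the recovered instantaneous frequency stays near 600 Hz and the amplitude near 0.5; in the paper's method such amplitude/frequency tracks, computed per adaptive band, are what provide the semantic features to be coded.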
"A method for extracting the semantic features of speech signal recognition based on empirical wavelet transform", by Oleksandr Lavrynenko, Denis Bakhtiiarov, Vitaliy Kurushkin, Serhii Zavhorodnii, Veniamin Antonov, Petro Stanko. Radioelectronic and Computer Systems, 2023-09-29. DOI: 10.32620/reks.2023.3.09.
The COVID-19 pandemic has posed unprecedented challenges to global healthcare systems, emphasizing the need for predictive tools for resource allocation and patient care. This study delves into the potential of machine learning models to predict the risk levels of COVID-19 patients using a comprehensive dataset. This study aimed to evaluate and compare the efficacy of three distinct machine learning methodologies – Bayesian Criterion, Logistic Regression, and Gradient Boosting – in predicting the risk associated with COVID-19 patients based on their symptoms, status, and medical history. This research targets the process of patient state determination, and its subjects are machine learning methods for patient state determination. To achieve the aim of the research, the following tasks were formulated: analyze methods and models for COVID-19 patient state determination; develop classification models for patient state determination based on the Bayes criterion, logistic regression, and gradient boosting; develop an information system; conduct an experimental study of the machine learning methods; and analyze the results of the experimental study. Methods: using a dataset provided by the Mexican government, which encompasses over a million unique patients with 21 distinct features, we developed an information system in the C# programming language. This system allows users to select their preferred method for risk calculation, offering a real-time decision-making tool for healthcare professionals. Results: All models demonstrated commendable accuracy levels. However, subtle differences in their performance metrics, such as sensitivity, precision, and the F1-score, were observed. 
The Gradient Boosting method slightly outperformed the other models in terms of overall accuracy. Conclusions: While each model showcased its merits, the choice of method should be based on the specific needs and constraints of the healthcare system. The Gradient Boosting method emerged as marginally superior in this study. This research underscores the potential of machine learning in enhancing pandemic response strategies, offering both scientific insights and practical tools for healthcare professionals.
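A comparison of this kind can be sketched with scikit-learn on synthetic data. The study itself used the Mexican government dataset and a C# implementation; here GaussianNB stands in for the Bayes criterion, and a random 21-feature dataset substitutes for the real patient records.

```python
# Hedged sketch of the three-way model comparison on a synthetic stand-in
# dataset (21 features mirrors the real dataset's feature count; the samples
# themselves are random, so the scores do not reproduce the paper's results).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB          # Bayes-criterion stand-in
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score

X, y = make_classification(n_samples=2000, n_features=21, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "Bayes": GaussianNB(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    scores[name] = (accuracy_score(y_te, pred), f1_score(y_te, pred))
    print(name, scores[name])
```

As in the study, accuracy alone rarely separates the models decisively, which is why the secondary metrics (sensitivity, precision, F1) matter when choosing a method for a specific healthcare setting.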
"Comparative analysis of the machine learning models determining COVID-19 patient risk levels", by Kseniia Bazilevych, Olena Kyrylenko, Yurii Parfenyuk, Serhii Krivtsov, Ievgen Meniailov, Victoriya Kuznietcova, Dmytro Chumachenko. Radioelectronic and Computer Systems, 2023-09-29. DOI: 10.32620/reks.2023.3.01.