Olesia Barkovska, I. Filippenko, Ivan Semenenko, Valentyn Korniienko, P. Sedlaček
The work is devoted to a topical problem at the intersection of communications theory, digital electronics, and numerical analysis, namely the study of the implementation time of image processing methods on different architectures of computational devices used for software and hardware acceleration. The subject of this article is the investigation of reconfigurable FPGA processing systems in the image processing area. The goal of this work is to create a reconfigurable FPGA-based image processing system and compare it with existing processing architectures. Task. To fulfill the requirements of this work, it is necessary to prepare a practical experiment as well as theoretical research of the proposed architecture; to investigate the process of creating a ZYNQ SoC-based image processing system; and to develop and benchmark the speed of execution for the given set of algorithms over a specific range of picture resolutions. Methods used: FPGA simulation, C++ parallel programming with OpenMP, NVIDIA CUDA, and performance analysis tools. The result of this work is the development of a resilient Zynq-7000 SoC-based computing system with programmable logic and the ability to load images into FPGA RAM using the resources of the ARM core for further processing and output via the HDMI video interface, which enables the PL configuration to be changed at any time during processing. Conclusions. The efficiency of the FPGA approach was compared with a parallel image processing implementation using OpenMP and CUDA. An overview of the ZYNQ platform with specific details related to media processing is presented. The analysis of algorithm speed testing findings based on various outputs proved the advantage (of over 60 times) of hardware acceleration of image processing over software analogs. The obtained results may be used in the development of embedded SoC-based solutions that require acceleration of big data processing.
Also, the achieved findings can be used during the process of finding a suitable embedded platform for a certain image-processing task, where high data throughput is one of the most desired requirements.
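As a rough illustration of the kind of benchmark the study describes, the sketch below times a naive 3x3 box blur at several resolutions. It is a hypothetical pure-Python stand-in for the paper's FPGA/OpenMP/CUDA pipelines, not the authors' code.

```python
import time

def box_blur(img, w, h):
    """Naive 3x3 box blur on a flat grayscale buffer; border pixels stay unchanged."""
    out = img[:]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    s += img[(y + dy) * w + (x + dx)]
            out[y * w + x] = s // 9
    return out

def benchmark(resolutions):
    """Time the blur once per (width, height) pair on a deterministic test pattern."""
    results = {}
    for w, h in resolutions:
        img = [(x * 7 + 13) % 256 for x in range(w * h)]  # synthetic image
        t0 = time.perf_counter()
        box_blur(img, w, h)
        results[(w, h)] = time.perf_counter() - t0
    return results
```

Plotting `results[(w, h)]` against `w * h` shows the roughly linear growth in processing time that makes hardware acceleration attractive at high resolutions.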
Adaptation of FPGA architecture for accelerated image preprocessing. Radioelectronic and Computer Systems, 2023-05-25. DOI: 10.32620/reks.2023.2.08
O. Fedorovych, O. Prokhorov, Yurii Pronchakov, Andrei Popov, Myroslav Momot
A multivariate task related to the modeling of high-tech enterprise relocation under new challenges and threats is described and solved. The relevance of the study stems from the complexity of the task of moving a high-tech enterprise to a new location for the production of competitive products. The purpose of the publication is to present the created set of models that allow: to justify the choice of the location (or locations) of the enterprise; to form a set of suppliers of components, considering long logistics chains; and to research the relocation of a high-tech enterprise in a special period related to martial law. The existing problems of the relocation of high-tech enterprises are analyzed: the change in the political and economic conditions of global and local production; existing long logistics supply chains of components that have vulnerabilities that are triggered when threats appear; the problem of locating distributed production over a large area; economic losses due to the complex distributed logistics of component supply; the many manufacturers of components that support the main production process; and the problem of relocation (evacuation) of enterprises in a special period, under martial law. A model for choosing a new location for the enterprise is proposed, considering contradictory indicators: the cost (rent) of land plots for the location of the enterprise; territory preparation for the location of the enterprise; logistics costs for moving the enterprise; expenses for training (retraining) of workers; relocation project risks; etc. Taking into account the combinatorial nature of the task under consideration and the complexity of locating the distributed enterprise (not in one, but in several locations), a model of rational placement of production was created.
A method of choosing a set of suppliers of components for high-tech enterprises is developed; this method considers the length of logistics chains, the time spent on delivery, the quality of components produced by suppliers, and supply risks. A multi-criteria optimization model for choosing suppliers is created, considering some contradictory indicators. The model of relocation (evacuation) of a high-tech enterprise in a special period, in the conditions of wartime threats and risks of moving technological equipment, is proposed. A simulation model is developed to study the logistics of enterprise relocation in the form of an agent-based representation; this model simulates the events associated with the sequence of relocation actions: dismantling of technological equipment, transportation of equipment, and installation of enterprise subsystems. The emergence of threats and the consequences of their actions, which are associated with a violation of the logistics of moving the enterprise, are simulated. An illustrated example of the study of enterprise relocation in the conditions of the emergence of threats and the cessation of tech
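A minimal sketch of the weighted multi-criteria scoring idea behind the location-choice model described above. The candidate sites, indicator names, and weights are invented for illustration and are not taken from the paper.

```python
def score_location(indicators, weights):
    """Weighted sum of normalized indicators (1.0 = best possible value)."""
    return sum(weights[k] * indicators[k] for k in weights)

def choose_location(candidates, weights):
    """Pick the candidate site with the highest weighted score."""
    return max(candidates, key=lambda name: score_location(candidates[name], weights))

# Hypothetical sites scored on normalized indicators; names and numbers are invented.
candidates = {
    "site_A": {"land_cost": 0.6, "logistics": 0.8, "retraining": 0.5, "risk": 0.9},
    "site_B": {"land_cost": 0.9, "logistics": 0.4, "retraining": 0.7, "risk": 0.6},
}
weights = {"land_cost": 0.3, "logistics": 0.3, "retraining": 0.2, "risk": 0.2}
best = choose_location(candidates, weights)
```

A weighted sum is only the simplest way to reconcile contradictory indicators; the paper's combinatorial placement over several locations requires richer multi-criteria optimization.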
Modeling of the relocation of high-tech enterprises for the release of innovative products. Radioelectronic and Computer Systems, 2023-05-25. DOI: 10.32620/reks.2023.2.15
Md. Apu Hosen, Shahadat Hoshen Moz, Md. Mahamudul Hasan Khalid, Sk. Shalauddin Kabir, Dr. Syed Md. Galib
The subject matter of the article is the design of an attendance system based on face recognition with anti-spoofing, a system alarm, and email automation to improve accuracy and efficiency, highlighting its potential to revolutionize traditional attendance tracking methods. Administering attendance manually can be a tremendous burden on the authority. Therefore, the goal of this study is to design a reliable and efficient attendance system that can replace traditional manual approaches while also detecting and preventing spoofing attempts. Instead of the manual approach, attendance may be collected using many kinds of technologies, including biometric systems, radio-frequency card systems, and facial recognition systems. The face recognition attendance system stands out among the rest as a strong alternative to the traditional attendance system used in offices and classrooms. The tasks to be accomplished include selecting appropriate facial detection and recognition technologies, implementing anti-spoofing measures to prevent intruders from exploiting the system, and integrating system alarms and email automation to improve accuracy and efficiency. The methods used include selecting the Haar cascade for facial detection and the LBPH algorithm for facial recognition, using DoG filtering with Haar for anti-spoofing, and implementing a speech system alarm for detecting intruders. The resulting system achieves a face recognition rate of 87 % and a false positive rate of 15 %. However, since the recognition rate is not 100 %, attendance will also be reported through email automation in case someone is present but is not detected by the system. In conclusion, the designed attendance system offers an effective and efficient alternative to the traditional attendance system used in offices and classrooms, providing accurate attendance records while also preventing spoofing attempts and notifying authorities of any intruders.
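The DoG (difference of Gaussians) filtering mentioned as an anti-spoofing cue can be illustrated in one dimension: blurring at two scales and subtracting yields a band-pass response that is flat on uniform regions but strong at edges. This is a simplified 1-D sketch, not the system's 2-D implementation.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """1-D convolution with edge clamping."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += kv * signal[idx]
        out.append(acc)
    return out

def difference_of_gaussians(signal, sigma1=1.0, sigma2=2.0, radius=4):
    """Band-pass response: fine-scale blur minus coarse-scale blur."""
    fine = convolve(signal, gaussian_kernel(sigma1, radius))
    coarse = convolve(signal, gaussian_kernel(sigma2, radius))
    return [a - b for a, b in zip(fine, coarse)]
```

The intuition for spoof detection is that flat reproductions (printed photos, screens) tend to have weaker mid-frequency texture than live faces, which DoG filtering exposes.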
Md. Apu Hosen, Shahadat Hoshen Moz, Md. Mahamudul Hasan Khalid, Sk. Shalauddin Kabir, Syed Md. Galib. Face recognition-based attendance system with anti-spoofing, system alert, and email automation. Radioelectronic and Computer Systems, 2023-05-25. DOI: 10.32620/reks.2023.2.10
The subject of the article is determining the degree of scientific and technical text connectedness using statistical calculations. The aim of the scientific investigation is to study the possibilities of using the coherence of fluctuations in the relative frequencies of keywords in paragraphs to determine the lexical coherence and thematic unity of scientific and technical texts. The task is to develop a method for determining the thematic unity of a text at the level of a set of paragraphs; to develop a method for determining the coherence of a text at the same level; and to test the developed methods on a collection of documents. The methods used are statistical analysis and computational experiment methods. The following results were obtained. The study has shown that it is advisable to cluster paragraphs as points in the keyword space to determine the degree of scientific and technical text coherence at the level of paragraphs. This opens up the possibility of calculating the degree of thematic unity within the clusters and in the entire text. The degree of coherence of text fragments and of the whole text is determined by analyzing the sequence of paragraph numbers in the clusters. This makes it possible to formally determine the quality of the material presented in a scientific and technical article or in a textbook. Conclusions. The scientific novelty of the study is as follows: the method for determining the degree of connectedness (coherence and thematic unity) of scientific and technical texts at the paragraph level was refined by clustering paragraphs in the keyword space, calculating the degree of thematic unity inside the clusters and in the overall text, and analyzing the sequence of paragraph numbers in the clusters to determine the coherence of text fragments and of the overall text. The methods are language-independent, based on clear hypotheses, and complement each other. The methods have an adjusting element that can be used to adapt them to different thematic and stylistic areas. It has been experimentally proved that the proposed methods for determining scientific and technical text connectedness are efficient and can provide the framework for an information technology for content analysis of scientific and technical texts. The proposed methods do not use web resources for syntactic and semantic analysis, which makes it possible to use them autonomously.
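The idea of clustering paragraphs as points in the keyword space can be sketched as follows. The greedy threshold clustering, the sample paragraphs, and the keyword list are illustrative assumptions, not the authors' exact algorithm.

```python
import math

def keyword_vector(paragraph, keywords):
    """Relative frequency of each keyword in the paragraph."""
    words = paragraph.lower().split()
    n = max(len(words), 1)
    return [words.count(k) / n for k in keywords]

def cosine(u, v):
    """Cosine similarity between two vectors; 0.0 when either is all zeros."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_paragraphs(paragraphs, keywords, threshold=0.5):
    """Greedy clustering: join a paragraph to the first cluster whose seed it resembles."""
    vecs = [keyword_vector(p, keywords) for p in paragraphs]
    clusters = []  # lists of paragraph indices
    for i, v in enumerate(vecs):
        for c in clusters:
            if cosine(v, vecs[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

Contiguous runs of paragraph indices inside a cluster indicate a coherently developed theme, while scattered indices signal weak coherence, mirroring the paragraph-number-sequence analysis described above. The `threshold` parameter plays the role of the adjusting element.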
Ihor Shevchenko, Pavlo Andreev, Maiia Dernova, Olena Poddubei. Paragraph-oriented methods for determining the coherence and thematic unity of scientific and technical texts. Radioelectronic and Computer Systems, 2023-05-25. DOI: 10.32620/reks.2023.2.03
V. Barannik, S. Shulgin, V. Kozlovskyi, R. Onyshchenko, T. Belikova, O. Ihnatiev, Viacheslav Khlopiachyi
The subject of research in this article is methods of encoding transformed video segments to reduce their bit volume without loss of information integrity. The goal is to develop a technology for coding uneven diagonal sequences under the conditions of their arbitrary positioning in the transformant. Task: to justify the approach of creating new methods of encoding video segments, considering the features of the combinatorial configuration of transformants; to create a method of formatting the coordinate system of spectral components in an uneven diagonal direction; to develop a method of encoding non-uniform diagonal sequences in a two-dimensional spectral space; and to build a technology for the recurrent realization of the process of sliding truncated-positional coding of uneven-diagonal sequences. The methods used are: mathematical models for estimating the amount of structural-combinatorial and psychovisual-combinatorial redundancy in an uneven-diagonal spectral space; and methods of positional coding. The following results were obtained. The potential advantages of considering the combinatorial configuration of the transformant based on its reformatting according to the non-uniform diagonal structure are substantiated. A technology for recurrent truncated-positional coding of video segments in non-uniform diagonal space has been developed. It is based on two technological components. The first component is a pyramidal system for positioning the diagonals and their components in the transformant. The second component is a method for the recurrent implementation of truncated-positional coding of uneven-diagonal sequences. Such coding is organized regardless of the positioning of the diagonals in the two-dimensional spectral space of the transformant. A comparative evaluation revealed the advantages of the created method over standardized transformant coding methods. The advantage is achieved in the level of bit volume reduction and reaches an average of 15-30 %. Conclusions. For the first time, a method for establishing the coordinates of the components in the diagonals was developed. It is based on considering the features of the structural configuration of the transformant. This creates conditions for reducing time delays in processing video segments. For the first time, a method for recurrent coding of diagonals based on truncated-positional systems was created. This makes it possible to avoid violations of the conditions of mutually unambiguous (one-to-one) code conversion.
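The basic reindexing of a transformant into diagonal sequences of uneven length, the starting point for the coding described above, can be sketched as follows. The paper's pyramidal positioning and truncated-positional coding are considerably more elaborate; this only shows how anti-diagonals of a square block naturally have unequal lengths.

```python
def diagonal_sequences(block):
    """Group the elements of a square block by anti-diagonal (x + y = const).
    Diagonals near the corners are shorter, hence sequences of uneven length."""
    n = len(block)
    diags = [[] for _ in range(2 * n - 1)]
    for y in range(n):
        for x in range(n):
            diags[x + y].append(block[y][x])
    return diags
```

For an N x N transformant this produces 2N - 1 sequences with lengths 1, 2, ..., N, ..., 2, 1, and each sequence can then be coded independently of where its diagonal sits in the block.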
Method of recurrent truncated-positional coding video segments in uneven diagonal space. Radioelectronic and Computer Systems, 2023-05-25. DOI: 10.32620/reks.2023.2.11
G. Fedorenko, H. Fesenko, V. Kharchenko, I. Kliushnikov, Ihor Tolkunov
The subject of this study is systems for detection and identification (D&I) of explosive ordnance (EO). The aim of this study is to develop a concept, general structure, and models of a robotic-biological system for D&I of EO (RBS-D&I). The objectives are as follows: 1) to classify mobile systems for D&I of EO and suggest a concept of RBS-D&I; 2) to develop the general structure of RBS-D&I consisting of robotic (flying and ground) and biological subsystems; 3) to develop models of RBS-D&I including automaton, hierarchical, and operational ones; 4) to describe tasks and planned results of the article-related scientific project; and 5) to discuss research results. The following results were obtained. 1) The general structure of the RBS-D&I. The structure comprises the following levels: control and processing centres (mobile ground control and processing centre (MGCPC) and virtual control and processing centre); forces for detection and identification (fleet of unmanned aerial vehicles (FoU), biological detection information subsystem (BDIS), and robotic detection information subsystem (RDIS)); interference; natural covers and a bedding surface; and target objects (all munitions containing explosives, nuclear fission or fusion materials and biological and chemical agents). 2) A concept of RBS-D&I. The concept is based on RBS-D&I description, analysis, development, and operation as an integrated complex cyber-physical and cyber-biological system running in changing physical and information environments. 3) The RBS-D&I automata model. The model describes RBS-D&I operating in two modes. In mode 1, FoU and BDIS operate separately and interact through the MGCPC only. In mode 2, depending on the specifics of the tasks performed, FoU and RDIS can directly interact among themselves or through the MGCPC. 4) A hierarchical model. The model has two sets of vertices: EO detection and platforms equipped with the necessary sensors. 5) An operational cycle model.
The model describes land release operations via a methodology of functional modeling and graphic description of IDEF0 processes. Conclusions. The proposed concept and RBS-D&I solutions can provide high-performance and guaranteed EO detection in designated areas by the implementation of an intelligent platform and tools for planning the use of multifunctional fleets of UAVs and other RBS-D&I subsystems.
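A toy sketch of the two interaction modes of the automata model described above, expressed as message routing between the named subsystems. The function and its interface are hypothetical illustrations, not the paper's formal automaton.

```python
def route_message(mode, sender, receiver):
    """Return the path a message takes between RBS-D&I subsystems.
    Mode 1: subsystems interact only through the MGCPC.
    Mode 2: FoU and RDIS may additionally interact directly."""
    direct_allowed = mode == 2 and {sender, receiver} == {"FoU", "RDIS"}
    return [sender, receiver] if direct_allowed else [sender, "MGCPC", receiver]
```

Even in mode 2, traffic between FoU and BDIS still passes through the MGCPC; only the robotic subsystems gain a direct channel.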
Robotic-biological systems for detection and identification of explosive ordnance: concept, general structure, and models. Radioelectronic and Computer Systems, 2023-05-25. DOI: 10.32620/reks.2023.2.12
Jaafar Ahmed, Andrii Karpenko, Olga Tarasyuk, Anatoliy Gorbenko, Akbar Sheikh-Akbari
Distributed replicated databases play a crucial role in modern computer systems, enabling scalable, fault-tolerant, and high-performance data management. However, achieving these qualities requires resolving a number of trade-offs between various properties during system design and operation. This paper reviews trade-offs in distributed replicated databases and provides a survey of recent research papers studying distributed data storage. The paper first discusses a compromise between consistency and latency that appears in distributed replicated data stores and directly follows from the CAP and PACELC theorems. Consistency refers to the guarantee that all clients in a distributed system observe the same data at the same time. To ensure strong consistency, distributed systems typically employ coordination mechanisms and synchronization protocols that involve communication and agreement among distributed replicas. These mechanisms introduce additional overhead and latency and can dramatically increase the time taken to complete operations when replicas are globally distributed across the Internet. In addition, we study trade-offs between other system properties, including availability, durability, cost, energy consumption, and read and write latency. In this paper, we also provide a comprehensive review and classification of recent research works in distributed replicated databases. The reviewed papers showcase several major areas of research, ranging from performance evaluation and comparison of various NoSQL databases to suggesting new strategies for data replication and putting forward new consistency models. In particular, we observed a shift towards exploring hybrid consistency models of causal consistency and eventual consistency with causal ordering due to their ability to strike a balance between operation-ordering guarantees and high performance.
Researchers have also proposed various consistency control algorithms and consensus quorum protocols to coordinate distributed replicas. Insights from this review can empower practitioners to make informed decisions in designing and managing distributed data storage systems as well as help identify existing gaps in the body of knowledge and suggest further research directions.
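The consensus quorum protocols this review mentions rest on a simple overlap condition. As a minimal, illustrative sketch (a generic Dynamo-style model, not drawn from any specific paper under review), strong consistency requires the read and write quorums to intersect:

```python
# Illustrative sketch of quorum-based replica coordination.
# N = number of replicas, R = read quorum size, W = write quorum size.

def is_strongly_consistent(n: int, r: int, w: int) -> bool:
    """Read and write quorums overlap when R + W > N, so every read
    contacts at least one replica holding the latest acknowledged write."""
    return r + w > n

# With N = 3 replicas:
assert is_strongly_consistent(3, 2, 2)      # R=2, W=2: overlap guaranteed
assert not is_strongly_consistent(3, 1, 1)  # R=1, W=1: eventual consistency only
```

Quorum-based stores expose N, R, and W as tunables, which is exactly the consistency/latency dial the CAP and PACELC discussion above describes: larger quorums tighten consistency but add coordination latency on every operation.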
{"title":"Consistency issue and related trade-offs in distributed replicated systems and databases: a review","authors":"Jaafar Ahmed, Andrii Karpenko, Olga Tarasyuk, Anatoliy Gorbenko, Akbar Sheikh-Akbari","doi":"10.32620/reks.2023.2.14","DOIUrl":"https://doi.org/10.32620/reks.2023.2.14","url":null,"abstract":"Distributed replicated databases play a crucial role in modern computer systems enabling scalable, fault-tolerant, and high-performance data management. However, achieving these qualities requires resolving a number of trade-offs between various properties during system design and operation. This paper reviews trade-offs in distributed replicated databases and provides a survey of recent research papers studying distributed data storage. The paper first discusses a compromise between consistency and latency that appears in distributed replicated data storages and directly follows from CAP and PACELC theorems. Consistency refers to the guarantee that all clients in a distributed system observe the same data at the same time. To ensure strong consistency, distributed systems typically employ coordination mechanisms and synchronization protocols that involve communication and agreement among distributed replicas. These mechanisms introduce additional overhead and latency and can dramatically increase the time taken to complete operations when replicas are globally distributed across the Internet. In addition, we study trade-offs between other system properties including availability, durability, cost, energy consumption, read and write latency, etc. In this paper we also provide a comprehensive review and classification of recent research works in distributed replicated databases. Reviewed papers showcase several major areas of research, ranging from performance evaluation and comparison of various NoSQL databases to suggest new strategies for data replication and putting forward new consistency models. 
In particular, we observed a shift towards exploring hybrid consistency models of causal consistency and eventual consistency with causal ordering due to their ability to strike a balance between operations ordering guarantees and high performance. Researchers have also proposed various consistency control algorithms and consensus quorum protocols to coordinate distributed replicas. Insights from this review can empower practitioners to make informed decisions in designing and managing distributed data storage systems as well as help identify existing gaps in the body of knowledge and suggest further research directions.","PeriodicalId":36122,"journal":{"name":"Radioelectronic and Computer Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136345884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Md. Ahsan Habib, Romana Rahman Ema, Tajul Islam, Md. Yasir Arafat, M. Hasan
The subject of this study has a significant impact on daily life. In various fields, such as journalism, academia, and business, large amounts of text need to be processed quickly and efficiently. Text summarization is a technique used to generate a precise and shortened summary of lengthy texts. The generated summary sustains the overall meaning without losing information and focuses on the parts that contain useful information. The goal is to develop a model that converts lengthy articles into concise versions. The task to be solved is to select an effective procedure for developing the model. Although present text summarization models give good results on many recognized datasets, such as CNN/Daily Mail and Newsroom, these models cannot resolve all problems. In this paper, a new text summarization method is proposed that combines extractive and abstractive text summarization techniques. In the extractive stage, the model generates a summary using a sentence ranking algorithm and passes this generated summary through an abstractive method. When the sentence ranking algorithm rearranges the sentences, the relationships between adjacent sentences are destroyed. To overcome this, pronoun-to-noun conversion is proposed within the new system. After the extractive summary is generated, it is passed through the abstractive method. The proposed abstractive model consists of three pre-trained models, google/pegasus-xsum, facebook/bart-large-cnn, and Yale-LILY/brio-cnndm-uncased, and generates a final summary depending on the maximum final score. The following results were obtained: experimental results on the CNN/Daily Mail dataset show that the proposed model obtained ROUGE-1, ROUGE-2, and ROUGE-L scores of 42.67 %, 19.35 %, and 39.57 %, respectively. The results were then compared with three state-of-the-art methods: JEANS, DEATS, and PGAN-ATSMT.
The results outperform those of the state-of-the-art models. Experimental results also show that the proposed model is qualitatively readable and can generate abstract summaries. Conclusion: in terms of ROUGE score, the model outperforms some state-of-the-art models on ROUGE-1 and ROUGE-L, but does not achieve a good result on ROUGE-2.
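The extractive stage described above can be illustrated with a minimal word-frequency sentence ranker. This is a hypothetical stand-in for intuition only; the paper's actual sentence ranking algorithm and pronoun-to-noun conversion step are not reproduced here:

```python
# Illustrative extractive summarizer: score each sentence by the corpus
# frequency of its words, keep the top-k, and re-emit them in original
# order (re-ordering is what breaks inter-sentence relationships, the
# problem the paper's pronoun-to-noun conversion addresses).
import re
from collections import Counter

def rank_sentences(text: str, top_k: int = 2) -> list[str]:
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sent: str) -> float:
        toks = re.findall(r'[a-z]+', sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = sorted(sentences, key=score, reverse=True)[:top_k]
    return sorted(top, key=sentences.index)  # preserve document order
```

In the paper's pipeline, the selected sentences would then be fed to the three abstractive models, with the final summary chosen by the maximum final score.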
{"title":"Automatic text summarization based on extractive-abstractive method","authors":"Md. Ahsan Habib, Romana Rahman Ema, Tajul Islam, Md. Yasir Arafat, M. Hasan","doi":"10.32620/reks.2023.2.01","DOIUrl":"https://doi.org/10.32620/reks.2023.2.01","url":null,"abstract":"The choice of this study has a significant impact on daily life. In various fields such as journalism, academia, business, and more, large amounts of text need to be processed quickly and efficiently. Text summarization is a technique used to generate a precise and shortened summary of spacious texts. The generated summary sustains overall meaning without losing any information and focuses on those parts that contain useful information. The goal is to develop a model that converts lengthy articles into concise versions. The task to be solved is to select an effective procedure to develop the model. Although the present text summarization models give us good results in many recognized datasets such as cnn/daily- mail, newsroom, etc. All the problems can not be resolved by these models. In this paper, a new text summarization method has been proposed: combining the Extractive and Abstractive Text Summarization technique. In the extractive-based method, the model generates a summary using Sentence Ranking Algorithm and passes this generated summary through an abstractive method. When using the sentence ranking algorithm, after rearranging the sentences, the relationship between one sentence and another sentence is destroyed. To overcome this situation, Pronoun to Noun conversion has been proposed with the new system. After generating the extractive summary, the generated summary is passed through the abstractive method. The proposed abstractive model consists of three pre-trained models: google/pegusus-xsum, face-book/bart-large-cnn model, and Yale-LILY/brio-cnndm-uncased, which generates a final summary depending on the maximum final score. 
The following results were obtained: experimental results on CNN/daily-mail dataset show that the proposed model obtained scores of ROUGE-1, ROUGE-2 and ROUGE-L are respectively 42.67 %, 19.35 %, and 39.57 %. Then, the result has been compared with three state-of-the-art methods: JEANS, DEATS and PGAN-ATSMT. The results outperform state-of-the-art models. Experimental results also show that the proposed model is qualitatively readable and can generate abstract summaries. Conclusion: In terms of ROUGE score, the model outperforms some art-of-the-state models for ROUGE-1 and ROUGE-L, but doesn’t achieve good result in ROUGE-2.","PeriodicalId":36122,"journal":{"name":"Radioelectronic and Computer Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45808964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
O. Viunytskyi, V. Lukin, A. Totsky, V. Shulgin, Nadejda Kozhemiakina
High blood pressure (BP), or hypertension, is an extremely common and dangerous condition affecting more than 18–27 % of the world population. It causes many cardiovascular diseases that kill 7.6 million people around the world per year. The most accurate way to detect hypertension is ambulatory monitoring of blood pressure lasting 24 h or even more. Traditional non-invasive methods for measuring BP are oscillometric and auscultatory; both use an occlusive cuff as an external pressure source. Unfortunately, cuffed BP measurement creates some inconvenience for the patient and can be inaccurate due to incorrect cuff placement. Because of the problems caused by cuff methods, it has become necessary to develop cuffless methods for measuring blood pressure, which are based on the relationship of blood pressure with various manifestations of cardiac activity and hemodynamics that can be recorded and measured non-invasively, without a compression cuff and with simple technical means. Over the past decade, there have been many publications devoted to estimating blood pressure based on pulse wave velocity (PWV) or pulse wave transit time (PTT). However, this approach has a few disadvantages. First, the measurement of BP using only the PTT parameter is valid only for a given patient. Second, the linear model of the relationship between BP and PTT is valid only in a small range of BP variations. To solve this problem, neural networks or linear regression models have been used. The main problem with this approach is the accuracy of blood pressure measurement. This study builds one feed-forward neural network (FFNN) for determining systolic and diastolic blood pressure based on features extracted from electrocardiography (ECG) and photoplethysmography (PPG) signals without a cuff or calibration procedure.
The novelty of this work is the discovery of five new key points of the PPG signal, as well as the calculation of nine new features of the ECG and PPG signals, which improve the accuracy of blood pressure measurement. The object of the study was the ECG and PPG signals recorded from the patient's hand. The target of the study was to obtain systolic and diastolic blood pressure based on an FFNN, the input arguments of which are the parameters of the ECG and PPG signals. Algorithms for estimating signal parameters based on the determination of characteristic points in the PPG signal, the position of R-peaks in the ECG signal, and parameters calculated from the relationship of time parameters and amplitude ratios of these signals are described in detail. The Pearson correlation coefficients for these parameters and BP are determined, which helps to choose the set of signal parameters valuable for BP estimation. The results obtained show that the mean absolute error ± standard deviation for systolic and diastolic BP is equal to 1.72±3.008 mmHg and 1.101±1.9 mmHg, respectively; the correlation coefficients for the estimated and true BP are equal to
{"title":"Continuous cuffless blood pressure measurement using feed-forward neural network","authors":"O. Viunytskyi, V. Lukin, A. Totsky, V. Shulgin, Nadejda Kozhemiakina","doi":"10.32620/reks.2023.2.04","DOIUrl":"https://doi.org/10.32620/reks.2023.2.04","url":null,"abstract":"High blood pressure (BP) or hypertension is an extremely common and dangerous condition affecting more than 18–27 % of the world population. It causes many cardiovascular diseases that kill 7.6 million people around the world per year. The most accurate way to detect hypertension is ambulatory monitoring of blood pressure lasting up to 24 h and even more. Traditional non-invasive methods for measuring BP are oscillometric and auscultatory, they use an occlusal cuff as an external pressure source. Unfortunately, cuffed BP measurement creates some inconvenience for the patient and can be inaccurate due to incorrect cuff placement. In connection with the problems caused by cuff methods, it has become necessary to develop cuffless methods for measuring blood pressure, which are based on the relationship of blood pressure with various manifestations of cardiac activity and hemodynamics, which can be recorded and measured non-invasively, without the use of a compression cuff and with simple technical means. Over the past decade, there have been many publications devoted to estimating blood pressure based on pulse wave velocity (PWV) or pulse wave transit time (PTT). However, this approach has few disadvantages. First, the measurement of BP using only PTT parameter is valid only for a given patient. Second, the linear model of the relationship between BP and PTT is valid only in a small range of BP variations. To solve this problem neural networks or linear regression models were used. The main problem with this approach is the accuracy of blood pressure measurement. 
This study builds one feed-forward neural network (FFNN) for determining systolic and diastolic blood pressure based on features extracted from electrocardiography (ECG) and photoplethysmography (PPG) signals without a cuff and calibration procedure. The novelty of this work is the discovery of five new key points of the PPG signal, as well as the calculation of nine new features of the ECG and PPG signals, which improve the accuracy of blood pressure measurement. The object of the study was the ECG and PPG signals recorded from the patient's hand. The target of the study was to obtain systolic and diastolic blood pressure based on an FFNN, the input arguments of which are the parameters of the ECG and PPG signals. Algorithms for estimating signal parameters based on the determination of characteristic points in the PPG signal, the position of R-peaks in the ECG signal, and parameters calculated from the relationship of time parameters and amplitude ratios of these signals are described in detail. The Pearson correlation coefficients for these parameters and BP are determined, which helps to choose the set of signal parameters valuable for BP estimation. 
The results obtained show that the mean absolute error ± standard deviation for systolic and diastolic BP is equal to 1.72±3.008 mmHg and 1.101±1.9 mmHg, respectively; the correlation coefficients for the estimated and true BP are equal to ","PeriodicalId":36122,"journal":{"name":"Radioelectronic and Computer Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46595056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
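The FFNN described in this abstract maps ECG/PPG-derived features to systolic and diastolic pressure. The sketch below shows only the shape of such a network in NumPy; the layer size, the untrained random weights, and the 9-feature input are illustrative assumptions, not the authors' trained model:

```python
# Minimal feed-forward network sketch: feature vector -> [systolic, diastolic].
import numpy as np

rng = np.random.default_rng(0)

def init_ffnn(n_in: int, n_hidden: int = 16, n_out: int = 2) -> dict:
    """One hidden layer; weights are random placeholders (untrained)."""
    return {
        "W1": rng.standard_normal((n_in, n_hidden)) * 0.1,
        "b1": np.zeros(n_hidden),
        "W2": rng.standard_normal((n_hidden, n_out)) * 0.1,
        "b2": np.zeros(n_out),
    }

def predict_bp(params: dict, features: np.ndarray) -> np.ndarray:
    h = np.tanh(features @ params["W1"] + params["b1"])  # hidden layer
    return h @ params["W2"] + params["b2"]               # linear output: [SBP, DBP]

# Example: a vector of PTT, amplitude ratios, and similar features (values assumed).
params = init_ffnn(n_in=9)
bp_estimate = predict_bp(params, rng.standard_normal(9))
```

In the paper's setting, the input features are the characteristic-point, timing, and amplitude-ratio parameters extracted from the ECG and PPG signals, selected by their Pearson correlation with BP.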
H. Walidainy, Nawal Nashirah, Ramzi Adriman, Y. Away, N. Nasaruddin
Wireless technology is expected to undergo considerable transformation because of the numerous services offered by 6G communication networks, which encompass virtually every part of everyday life and use a variety of devices. Channel modeling is an essential factor in designing 6G communication networks. To meet the channel requirements of future 6G communication networks, it is crucial to measure the channel, considering path loss, multi-band behavior, fading, blocking effects, multipath clustering, and transmitter/receiver moving speed, direction, and time. The goal of this paper is to design and evaluate a 6G communication network in Banda Aceh City using a statistical channel model. The channel model is associated with environmental conditions such as rainfall and humidity. The method is based on computer simulation using the NYUSIM simulator to complete the channel modeling at an operating frequency of 95 GHz with a bandwidth of 800 MHz. In the simulator, the designed 6G channel model is evaluated in both line-of-sight (LOS) and non-line-of-sight (NLOS) network environments. In addition, the designated network parameters, such as the coverage area, angle of arrival (AoA), angle of departure (AoD), and power delay profile (PDP), are simulated. The results show that, for AoA, the received power under LOS conditions ranges from -86 dBm to -101 dBm, while under NLOS conditions it ranges from -91 dBm to -111 dBm. Under LOS conditions, the received power for AoD ranges from -86 dBm to -101 dBm, whereas under NLOS conditions it ranges from -91 dBm to -111 dBm. In the omnidirectional PDP, the path loss for the LOS condition is 99.8 dB with a delay of 17.9 ns, while the path loss for the NLOS condition is 104.2 dB with a delay of 28.1 ns. For the directional PDP, the LOS condition yields a path loss of 106.4 dB and a delay of 2.9 ns, while the NLOS condition yields a path loss of 110.5 dB and a delay of 3 ns. Conclusions.
The simulation indicated that the AoA, AoD, and PDP, in terms of received power, path loss, and propagation delay, are within acceptable ranges for a 6G network in Banda Aceh City in the two observed environments. Therefore, it is conceivable to establish a 6G network in Banda Aceh City in the future.
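Simulators in the NYUSIM family build on close-in (CI) free-space-reference path-loss models. A small sketch of that model at the paper's 95 GHz carrier follows; the path-loss exponents used here are illustrative assumptions, not the fitted Banda Aceh values:

```python
# Close-in (CI) free-space-reference path-loss model:
#   PL(d) = FSPL(f, 1 m) + 10 * n * log10(d / 1 m)
# where n is the path-loss exponent (larger for NLOS than LOS).
import math

C = 3e8  # speed of light, m/s

def ci_path_loss_db(freq_hz: float, dist_m: float, exponent: float) -> float:
    fspl_1m = 20 * math.log10(4 * math.pi * freq_hz / C)  # free-space loss at 1 m
    return fspl_1m + 10 * exponent * math.log10(dist_m)

# 95 GHz carrier as in the simulated scenario; n = 2.0 (LOS-like) and
# n = 3.0 (NLOS-like) are assumed for illustration.
pl_los = ci_path_loss_db(95e9, 100.0, 2.0)
pl_nlos = ci_path_loss_db(95e9, 100.0, 3.0)
```

The higher NLOS exponent reproduces the qualitative gap the abstract reports between LOS and NLOS path loss at the same distance.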
{"title":"Statistical channel model for 6G communication networks in Banda Aceh City","authors":"H. Walidainy, Nawal Nashirah, Ramzi Adriman, Y. Away, N. Nasaruddin","doi":"10.32620/reks.2023.2.06","DOIUrl":"https://doi.org/10.32620/reks.2023.2.06","url":null,"abstract":"Wireless technology is expected to undergo considerable transformation because of the numerous services offered by 6G communication networks, which virtually entirely encompass every part of everyday life and use a variety of devices. Channel modeling is an essential factor in designing 6G communication networks. To meet the channel requirements of future 6G communication networks, it is crucial to measure the channel to consider path loss, multi-band, fading, blocking effect, multipath clustering, transmitter, and receiver moving speed/direction/time. The goal of this paper is to design and evaluate a 6G communication network in Banda Aceh City using a statistical channel model. The channel model is associated with environmental conditions such as rainfall and humidity. The method is then based on computer simulation using the NYUSIM simulator to complete the channel modeling using an operating frequency of 95 GHz with a bandwidth of 800 MHz. In the simulator, the designed 6G channel model is evaluated in both line-of-sight (LOS) and non-line-of-sight (NLOS) network environments. In addition, the designated network parameters, such as the coverage area, angle of arrival (AoA), angle of departure (AoD), and power delay profile (PDP), are simulated. The results, at AoA, the value of received power for LOS conditions ranges from -86 dBm to -101 dBm, while the value for NLOS conditions ranges from -91 dBm to -111 dBm. Under LOS conditions, the received power for AoD ranges from -86 dBm to -101 dBm, whereas under NLOS conditions, it ranges from -91 dBm to -111 dBm. 
In the omnidirectional PDP, the pathloss value for the LOS condition is 99.8 dB and the delay is 17.9 ns, while the pathloss value for the NLOS condition is 104.2 dB with a delay of 28.1 ns. For the directional PDP, the LOS condition yields a path loss of 106.4 dB and a delay of 2.9 ns, while the NLOS condition yields a path loss of 110.5 dB and a delay of 3 ns. Conclusions. The simulation indicated that the AoA, AoD, and PDP in terms of received power, pathloss, and propagation delay are in acceptable conditions for a 6G network in Banda Aceh City in the two observed environments. Therefore, it is conceivable to establish a 6G network in Banda Aceh City in the future.","PeriodicalId":36122,"journal":{"name":"Radioelectronic and Computer Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42419281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}