The subject of the study is the process of designing an Information and Measuring System (IMS) as a component of a Cyber-Physical System (CPS) in the paradigm of Industry 4.0 (I4.0). The aim of the study is to develop methodological support for the design of IMS and Automatic Control Systems (ACS) as components of production CPS (CPPS), in particular for the Digital Factory. Objectives: to determine the conceptual model of the IMS; to choose the IoT model for structural synthesis; to select the appropriate regulatory support (sets of standards and implementation models) and hardware; to perform R&D of an IMS based on NB-IoT sensors; to formalize the procedure for integrating components into the CPPS; and to develop the Asset Administration Shell. The methods used are heuristic synthesis methods and the theory of experimental design. The following results were obtained. The key role of an optimally designed Level 4.0 IMS in increasing decision-making accuracy in CPPS management and control processes is demonstrated. The quality of control is improved both by quickly obtaining accurate information for updating models in cyber add-ons and, at the physical level, in the ACS. A universal model of IMS implementation in the CPPS is proposed. The stages of choosing the concept, structure, hardware, and communication protocols of the IIoT ecosystem IMS + ACS have been performed. The methodology was tested during the development of the NB-IoT Tech remote monitoring system, which has a decentralized structure for collecting data on consumed resources. The ecosystem has been integrated as a component of the CPPS at the appropriate levels of the RAMI 4.0 architectural model. Regulatory support has been formed, and the functional aspect of the Asset Administration Shell for CPPS integration has been developed. Conclusions.
Scientific novelty: it is proposed to design the IMS as a component of the Asset Administration Shell (AAS) of the cyber-physical system, according to the implementation methodology of its subsystems at the corresponding levels of RAMI 4.0 and the selected IoT model. The new approach, called "soft digitalization", combines the approaches of Industry 3.0 and 4.0; it is designed for the sustainable development of automated systems to the level of cyber-physical systems and is relevant for the recovery of the economy of Ukraine. Practical significance of the results: the IoT-Tech system based on smart sensors has been developed and tested. This information and measurement system is non-volatile and adapted to measure arbitrary parameters in automated systems at various levels of digitalization.
Design of information and measurement systems within the Industry 4.0 paradigm. O. Vasylenko, Sergii Ivchenko, Hennadii Snizhnoi. Radioelectronic and Computer Systems, 2023-03-07. doi: 10.32620/reks.2023.1.04
M. Buhaiov, V. Kliaznyka, Ihor Kozyura, Denys Zavhorodnii
The subject of this article is the process of detecting spectrum holes and estimating their frequency boundaries under conditions of high spectrum occupancy when using receivers with narrow instantaneous bandwidths. The work increases the probability of correct detection of spectrum holes under high occupancy of the radio frequency spectrum and variable noise levels by developing a method for distinguishing signal and noise samples based on the analysis of the histogram of spectral samples. The tasks to be solved are: development of a method for separating signal and noise samples in the frequency domain; development of a methodology for finding the minimal mode of a multimodal probability distribution; determination of the frequency boundaries of spectrum holes; and formulation of recommendations for the practical implementation of the developed method. The methods used are methods of probability theory and mathematical statistics, and methods of statistical modeling. The essence of the proposed method is to separate the set of energy-spectrum samples using a threshold obtained from the value of the histogram mode that corresponds to noise, and then to determine the frequency boundaries of the spectrum holes. The following results were obtained: an expression for calculating the threshold value for separating signal and noise samples in the frequency domain using the value that corresponds to the noise mode of the probability density function of the frequency samples. It was found that the noise mode has the smallest value among the modes, since noise samples are smaller than signal samples. A technique for estimating the value of the noise mode has been developed, which consists of constructing a histogram of the energy-spectrum frequency samples and finding the partition interval that corresponds to the minimal mode. An approach was proposed to determine the frequency boundaries of noise samples in the presence of one signal in the analyzed band. Conclusions.
The developed method detects spectrum holes with a probability of at least 0.9 at signal-to-noise ratios of at least 5 dB for spectra with a rectangular envelope and 12 dB for other envelopes, at occupancy of up to 80%.
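The histogram-mode thresholding idea described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' exact threshold expression: the bin count, the scale factor applied to the noise-mode value, and the synthetic spectrum are all assumptions.

```python
import numpy as np

def noise_mode_threshold(psd, bins=64, scale=3.0):
    """Threshold separating signal from noise samples, derived from the
    histogram of energy-spectrum samples: among all modes of the (multimodal)
    histogram, the one at the smallest sample value is taken as the noise
    mode, and the threshold is a scaled version of that value."""
    counts, edges = np.histogram(psd, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ext = np.concatenate([[0], counts, [0]])  # virtual empty bins at both ends
    modes = [i for i in range(bins)
             if ext[i + 1] > ext[i] and ext[i + 1] >= ext[i + 2]]
    return scale * centers[min(modes)]        # smallest-valued mode = noise mode

def spectrum_holes(psd, freqs, threshold):
    """Frequency boundaries (f_start, f_stop) of contiguous sub-threshold runs."""
    holes, start = [], None
    for i, below in enumerate(psd < threshold):
        if below and start is None:
            start = i
        elif not below and start is not None:
            holes.append((freqs[start], freqs[i - 1]))
            start = None
    if start is not None:
        holes.append((freqs[start], freqs[-1]))
    return holes

# Synthetic check: flat noise floor plus one strong occupant band.
rng = np.random.default_rng(1)
psd = rng.uniform(0.0, 1.0, 1024)
psd[400:600] += 30.0
thr = noise_mode_threshold(psd)
print(spectrum_holes(psd, np.arange(1024), thr)[:3])
```

The occupied band (bins 400-599) stays above the threshold, so the reported holes lie entirely outside it.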
Method for spectrum holes detection based on mode analysis of spectral samples histogram. M. Buhaiov, V. Kliaznyka, Ihor Kozyura, Denys Zavhorodnii. Radioelectronic and Computer Systems, 2022-11-29. doi: 10.32620/reks.2022.4.08
H. Badi, Imad Badi, K. E. Moutaouakil, Aziz Khamjane, Abdelkhalek Bahri
The global impact of COVID-19 has been significant, and several vaccines have been developed to combat the virus. However, these vaccines have varying levels of efficacy and effectiveness in preventing illness and providing immunity. As the world continues to grapple with the ongoing pandemic, the development and distribution of effective vaccines remain a top priority, making the monitoring of prevention strategies mandatory and necessary to mitigate the spread of the disease. These vaccines have raised a huge debate on social networks and in the media about their effectiveness and side effects. This has generated big data, requiring intelligent tools capable of analyzing these data in depth and extracting the underlying knowledge and sentiments. Few works analyze sentiments and, at the same time, predict those sentiments from their estimated polarities. In this work, first, we use big data and Natural Language Processing (NLP) tools to extract the entities expressed in tweets about AstraZeneca and Pfizer and estimate their polarities; second, we use a Long Short-Term Memory (LSTM) neural network to predict the future polarities of these two vaccines. To ensure parallel treatment of data for large-scale processing on clustered systems, we use the Apache Spark Framework (ASF), which enables the processing of massive amounts of data in a distributed way. The results showed that the Pfizer vaccine is more popular and trusted than AstraZeneca. Additionally, according to the predictions generated by the LSTM model, Pfizer is likely to maintain its strong market position in the foreseeable future. These predictive analytics, which use advanced machine learning techniques, have proven accurate in forecasting trends and identifying patterns in data. As such, we have confidence in the LSTM's prediction of Pfizer's ongoing dominance.
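The forecasting stage rests on an LSTM cell. As a minimal NumPy sketch of what such a cell computes (not the authors' trained model: the weights below are random, the hidden size and the synthetic polarity series are assumptions, and real use would train the network, e.g. with Keras on the Spark cluster):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, p):
    """One forward step of an LSTM cell: input (i), forget (f), output (o)
    gates and candidate state (g), all computed from [x, h_prev]."""
    z = p["W"] @ np.concatenate([x, h]) + p["b"]
    H = h.size
    i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
    o, g = sigmoid(z[2 * H:3 * H]), np.tanh(z[3 * H:])
    c = f * c + i * g              # updated cell state
    h = o * np.tanh(c)             # updated hidden state
    return h, c

def forecast_polarity(series, p, w_out):
    """Run the cell over a polarity time series, then emit a one-step forecast
    through a linear read-out."""
    H = p["b"].size // 4
    h, c = np.zeros(H), np.zeros(H)
    for v in series:
        h, c = lstm_step(np.array([v]), h, c, p)
    return float(w_out @ h)

rng = np.random.default_rng(0)
H = 8                                               # hidden size (assumption)
p = {"W": rng.normal(0.0, 0.1, (4 * H, 1 + H)), "b": np.zeros(4 * H)}
w_out = rng.normal(0.0, 0.1, H)
series = np.clip(rng.normal(0.3, 0.2, 30), -1, 1)   # synthetic daily polarities
print(forecast_polarity(series, p, w_out))
```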
Sentiment analysis and prediction of polarity vaccines based on Twitter data using deep NLP techniques. H. Badi, Imad Badi, K. E. Moutaouakil, Aziz Khamjane, Abdelkhalek Bahri. Radioelectronic and Computer Systems, 2022-11-29. doi: 10.32620/reks.2022.4.02
Boban P. Bondzulic, Dimitrije Bujaković, Fangfang Li, V. Lukin
Single- and three-channel images are widely used in numerous applications. Because of the increasing volume of such data, they must be compressed, and lossy compression offers more opportunities. Usually, it is supposed that, for a given image, a larger compression ratio leads to worse quality of the compressed image according to all quality metrics. This is true in most practical cases. However, it has recently been found that there are images, called “strange”, for which a rate-distortion curve, such as the dependence of the peak signal-to-noise ratio on the quality factor or quantization step, behaves non-monotonically. This can cause problems in the lossy compression of images. Thus, the basic subject of this paper is the factors that determine this phenomenon. The main ones are the artificial origin of an image, the possible presence of large homogeneous regions, and specific behavior of the image histogram. The main goal of this paper is to consider and explain the peculiarities of the lossy compression of strange images. The tasks of this paper are to provide definitions of strange images and to check whether the non-monotonicity of rate-distortion curves occurs for different coders and metrics. One more task is to put forward ideas and a methodology for further studies intended to detect strange images before their compression. The main result is that non-monotonic behavior can be observed for the same image across several quality metrics and coders. This means that image properties, not the coder, determine the probability of an image being strange. Moreover, both grayscale and color images can be strange, and both natural-scene and artificial images can be strange. This depends more on image properties than on image origin and the number of channels. In particular, the percentage of pixels belonging to large homogeneous regions and the image entropy play an important role.
As conclusions, we outline possible directions of future research that primarily relate to the analysis of images in large databases to establish parameters showing that a given image can be considered strange.
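Once a rate-distortion curve has been measured, the non-monotonicity that defines a “strange” image is easy to test for. A minimal sketch, with hypothetical PSNR values rather than data from the paper:

```python
def nonmonotonic_points(quality, psnr):
    """Return the quality factors at which PSNR decreases although the
    quality factor increased - the signature of a 'strange' image."""
    pairs = sorted(zip(quality, psnr))
    return [q2 for (q1, p1), (q2, p2) in zip(pairs, pairs[1:]) if p2 < p1]

# Hypothetical PSNR-vs-quality-factor measurements (dB), for illustration only:
q = [10, 20, 30, 40, 50, 60, 70, 80, 90]
normal  = [28.1, 30.4, 31.9, 33.0, 34.2, 35.5, 37.0, 39.1, 42.6]
strange = [33.0, 35.2, 36.8, 36.1, 37.9, 37.2, 38.8, 40.5, 43.0]

print(nonmonotonic_points(q, normal))   # [] - monotonic, a typical image
print(nonmonotonic_points(q, strange))  # [40, 60] - non-monotonic segments
```

Applied to a large database, such a check would flag candidate strange images for the further analysis the conclusions call for.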
On strange images with application to lossy image compression. Boban P. Bondzulic, Dimitrije Bujaković, Fangfang Li, V. Lukin. Radioelectronic and Computer Systems, 2022-11-29. doi: 10.32620/reks.2022.4.11
The subject of this study is the cyber vulnerability of wind generators as part of the cyber-physical system of intelligent power supply networks, the Smart Grid. Wind generators produce electricity for further distribution in the network among "smart" electricity consumers, which often include autonomous power systems in medical institutions, autonomous home power supplies, car charging stations, etc. Wind generators operate in two spaces: the physical and the informational. Thus, a violation of the security of a wind generator's information flow can affect the physical performance of electricity generation and disable equipment. The study aims to identify the types of cyber threats in the wind generator network based on an analysis of known attack incidents, the Smart Grid network structure, network devices, protocols, and wind generator control mechanisms. The tasks of the work are: to review and analyze known cyberattack incidents; to review the classification of cyber threats to wind farms; to consider the most common methods of attacks on the cyber-physical system of wind farms; to consider ways of intruding into the information flow of the wind generator's cyber-physical system; to consider resilience mechanisms of wind generators in case of a cyberattack; and to outline directions for further research. The methods used are a systematic approach that provides a comprehensive study of the problem, and quantitative and qualitative analysis of incidents and methods of cyberattacks on wind generators. The following results were obtained: 11 large-scale known incidents of cyberattacks on cyber-physical systems of the energy sector and smart power supply networks were analyzed, and the information flow features and structure of wind generators were considered.
The main communication interfaces of the Smart Grid network were reviewed; control mechanisms for the physical parts of the wind generator system, such as the automatic voltage regulator and automatic generation control, were examined; vulnerable data transmission protocols, DNP3 in particular, were analyzed; and possible consequences of a cyber-intrusion into the network were considered. Conclusions: wind farms, as part of the Smart Grid, are a convenient target for cyberattacks, as the number of potential ways to interfere with the information flow of the cyber-physical system grows with the number of sensors and communication channels in the network. This is especially important for the further development of wind farm security systems, which at present cannot provide high accuracy of intrusion detection in the information flow.
Smart Grid and wind generators: an overview of cyber threats and vulnerabilities of power supply networks. Ihor Fursov, Klym Yamkovyi, Oleksandr Shmatko. Radioelectronic and Computer Systems, 2022-11-29. doi: 10.32620/reks.2022.4.04
The subject of the study: a method of image encryption with pixel permutation implemented using fuzzy logic and the Henon mapping, together with diffusion implemented using the Lorenz system. Study objectives: to propose an effective way to apply fuzzy logic rules to the values generated by the Henon mapping to implement the permutation of pixels in the image, which provides a random permutation and increases the efficiency of the encryption method; to achieve better security in the image encryption process by applying a diffusion process implemented with the Lorenz system; and, to increase the sensitivity of the encryption method to changes in initial values, to also use the color components of the pixels in the encryption process. Investigation methods and research results: a method of image encryption is developed and presented, with pixel permutation implemented using fuzzy logic and the Henon mapping, and diffusion implemented using the Lorenz system. The initial values for the Henon mapping and the Lorenz system are determined from an entered keyword, the control parameters are set by the operator, and the values of the pixel color components also participate in the encryption process. In addition, before the pixels in the image are rearranged, fuzzy logic rules are applied to the Henon mapping output. The values of the pixel components before and after the diffusion procedure are also reduced to a single interval. Thus, as a result of encryption, the original image changes completely, loses its content and shape, and the color intensity distribution of the pixels becomes uniform.
A program implementation of the proposed encryption method was carried out, and its qualitative characteristics were evaluated, namely: analysis of the histograms of the original and encrypted images, correlation of adjacent image pixels, mean square error (MSE), peak signal-to-noise ratio (PSNR), and entropy before changing the color components of the pixels. Conclusions: the implementation of the method has shown that it has a large number of encryption keys, which makes brute-force key search resource-intensive and complex, and carrying out the encryption process in two stages using two different chaotic systems significantly improves the security of the encrypted image. The resulting cryptosystem is also resistant to the following attacks: approximation of chaotic orbits, correlation attacks, and analytical and statistical attacks.
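The two-stage structure (chaotic permutation followed by diffusion) can be sketched as follows. This is a simplified illustration, not the authors' scheme: the fuzzy-logic rules, the keyword-derived initial values, and the use of pixel color components in the key are omitted, and all map parameters are standard textbook values.

```python
import numpy as np

def henon_sequence(n, x=0.1, y=0.3, a=1.4, b=0.3):
    """Chaotic Henon-map trajectory used to build a pixel permutation."""
    out = np.empty(n)
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out[i] = x
    return out

def lorenz_bytes(n, state=(1.0, 1.0, 1.0),
                 sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.002):
    """Keystream for the diffusion stage: the Euler-integrated Lorenz
    x-component quantized to bytes."""
    x, y, z = state
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        out[i] = int(abs(x) * 1e4) % 256
    return out

def encrypt(img):
    """Permute pixels with a Henon-derived permutation, then XOR-diffuse with
    a Lorenz keystream.  (In the paper, initial values come from a keyword;
    here they are fixed defaults for illustration.)"""
    flat = img.ravel()
    perm = np.argsort(henon_sequence(flat.size))
    return (flat[perm] ^ lorenz_bytes(flat.size)).reshape(img.shape), perm

def decrypt(enc, perm):
    flat = enc.ravel() ^ lorenz_bytes(enc.size)   # undo diffusion
    out = np.empty_like(flat)
    out[perm] = flat                              # invert the permutation
    return out.reshape(enc.shape)
```

A round trip (`decrypt(encrypt(img))`) restores the image exactly, while the ciphertext bears no visible relation to the plaintext.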
M. Kushnir, Hryhorii Kosovan, Petro Kroialo, "Method of encrypting images based on two multidimensional chaotic systems using fuzzy logic," Radioelectronic and Computer Systems, 29 Nov 2022, doi: 10.32620/reks.2022.4.09.
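The permutation stage described above can be sketched as follows; this is a minimal illustration, assuming the standard Henon-map parameters (a = 1.4, b = 0.3) and arbitrary key values, with the paper's fuzzy-logic rules and Lorenz-system diffusion stage omitted:

```python
# Sketch: pixel permutation driven by the Henon map. Parameters and
# initial (key) values are illustrative, not those of the paper.
def henon_sequence(n, x=0.1, y=0.3, a=1.4, b=0.3):
    seq = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        seq.append(x)
    return seq

def permute(pixels, key_x=0.1, key_y=0.3):
    # sort positions by the chaotic values to obtain a key-dependent order
    chaos = henon_sequence(len(pixels), key_x, key_y)
    order = sorted(range(len(pixels)), key=lambda i: chaos[i])
    return [pixels[i] for i in order], order

def unpermute(shuffled, order):
    # invert the permutation during decryption
    out = [0] * len(shuffled)
    for dst, src in enumerate(order):
        out[src] = shuffled[dst]
    return out
```

Because the order is derived entirely from the keyed chaotic trajectory, the same keys reproduce (and invert) the shuffle exactly.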
Sumon Kumar Hazra, Romana Rahman Ema, S. Galib, Shalauddin Kabir, Nasim Adnan
Subject matter: Speech emotion recognition (SER) is an active and interesting research topic. Its purpose is to establish interaction between humans and computers through speech and emotion. To recognize speech emotions, five deep learning models are used in this paper: a Convolutional Neural Network (CNN), a Long Short-Term Memory (LSTM) network, an Artificial Neural Network (ANN), a Multi-Layer Perceptron (MLP), and a merged CNN-LSTM network. The Toronto Emotional Speech Set (TESS), Surrey Audio-Visual Expressed Emotion (SAVEE), and Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) datasets were used for this system. The models were trained on three merged combinations: TESS+SAVEE, TESS+RAVDESS, and TESS+SAVEE+RAVDESS. These datasets contain numerous audio recordings by both male and female speakers of English. This paper classifies seven emotions (sadness, happiness, anger, fear, disgust, neutral, and surprise), and identifying all seven for both male and female data is a challenge: most prior work has used male-only or female-only speech, and mixed male-female datasets have yielded low accuracy in emotion detection tasks. To train a deep learning model on audio data, features must first be extracted; Mel Frequency Cepstral Coefficients (MFCCs) extract all the features necessary for speech emotion classification. After training the five models with the three datasets, the best accuracy of 84.35 % was achieved by the CNN-LSTM with the TESS+SAVEE dataset.
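The MFCC feature extraction the abstract relies on can be sketched in plain NumPy; the frame length, hop size, filter count, and the number of retained coefficients below are common illustrative choices, not the paper's settings:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    # frame the signal and apply a Hann window
    frames = [signal[s:s + n_fft] * np.hanning(n_fft)
              for s in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / n_fft

    # triangular mel filterbank between 0 Hz and the Nyquist frequency
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fbank[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[i - 1, k] = (r - k) / max(r - c, 1)
    logmel = np.log(power @ fbank.T + 1e-10)

    # DCT-II decorrelates the log-mel energies; keep the first n_mfcc
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1) / (2 * n_mels))
    return logmel @ dct.T
```

Each row of the result is one frame's MFCC vector, which is what the CNN, LSTM, and merged CNN-LSTM models then consume.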
Sumon Kumar Hazra, Romana Rahman Ema, S. Galib, Shalauddin Kabir, Nasim Adnan, "Emotion recognition of human speech using deep learning method and MFCC features," Radioelectronic and Computer Systems, 29 Nov 2022, doi: 10.32620/reks.2022.4.13.
The sensitivity of a photodiode depends on the amount of radiation power it must register. The characteristics of a photodiode are, as is known, determined by its design: in particular, by the properties of the material used, the configuration of the electric fields, the mobility of the charge carriers, the width of the space charge region (SCR), etc. Additionally, the characteristics of the photodiode are determined by the externally applied voltage and the wavelength of the received optical radiation. When absorption occurs only in the SCR and at small distances around it, for example in a p-i-n photodiode, the frequency characteristics are determined mainly by the transit time of the generated charge carriers through the SCR. The subject is the creation of an algorithm for building a photodiode that must work at a certain wavelength, for example 0.95 μm. Silicon of p-type conductivity with a specific resistance of at least 10 kΩ·cm was chosen as the starting material. The goal is to create a model and an algorithm for developing a photodiode capable of providing maximum current monochromatic sensitivity through maximum collection of photogenerated charge carriers in its volume at the appropriate external bias. Task: to fulfill this requirement, theoretical and experimental research must be conducted. Methods: the technological processes for manufacturing the proposed photodiode can be similar to those used to form planar silicon p-i-n photodiodes. The proposed technical solution determines the correlation between the area over which photogenerated charge carriers are collected and the area over which they are generated. The result can be achieved by changing the design of the photodiode crystal, taking the obtained theoretical conclusions into account. Results: the factors determining the current monochromatic sensitivity of the photodiode were analyzed. A photodiode design with increased sensitivity compared to the serial FD-309 photodiode was developed. Conclusions: the proposed calculation was used to estimate the sensitivity of the photodiode. The operating voltage and, accordingly, the SCR width W were chosen considering the absorption depth at the operating wavelength of 0.95 μm. The calculation shows that the current monochromatic sensitivity of such a photodiode can be increased to 0.57 A/W, in contrast to the declared sensitivity of 0.5 A/W. Comparative studies of the produced batch of photodiodes and FD-309 photodiodes showed that the proposed photodiode indeed has a current monochromatic sensitivity of not less than 0.55 A/W at a wavelength of 0.95 μm. At the same time, its rise time is reduced from 50 ns to 10 ns, and its capacitance is 90 pF instead of the 100 pF of the FD-309.
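The reported figures can be checked against the ideal photodiode responsivity R = η·q·λ/(hc), which for λ in micrometres reduces to R ≈ η·λ/1.24 A/W; the quantum efficiency η below is back-computed from the paper's 0.57 A/W value and is an illustrative assumption, not a measured parameter:

```python
# Ideal responsivity of a photodiode: R = eta * q * lambda / (h * c),
# which for wavelength in micrometres is R ≈ eta * lambda / 1.24 [A/W].
def responsivity(wavelength_um, quantum_efficiency):
    return quantum_efficiency * wavelength_um / 1.23984

# Quantum efficiency implied by the reported 0.57 A/W at 0.95 um
# (a back-calculation for illustration, not a measured value).
eta = 0.57 * 1.23984 / 0.95
print(round(responsivity(0.95, eta), 2))  # → 0.57
```

The same relation shows why the 0.95 μm operating wavelength caps the achievable sensitivity: even at η = 1 the ideal responsivity is about 0.77 A/W, so 0.57 A/W corresponds to collecting roughly three-quarters of the photogenerated carriers.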
Y. Dobrovolsky, Yurii Sorokatyi, "Model and algorithm of creation of silicon photodiod with high sensitivity in the middle infrared area of the spectrum," Radioelectronic and Computer Systems, 29 Nov 2022, doi: 10.32620/reks.2022.4.07.
Serhii Krivtsov, I. Meniailov, K. Bazilevych, D. Chumachenko
The COVID-19 pandemic, which has been going on for almost three years, has shown that public health systems are not ready for such a challenge. Measures taken by governments in the healthcare sector under sharply increased pressure include containing the transmission and spread of the virus, providing sufficient space for medical care, ensuring the availability of testing facilities and medical care, and mobilizing and retraining medical personnel. The pandemic has changed government and business processes, digitalizing the economy and healthcare. Global challenges have stimulated data-driven medicine research. Forecasting the epidemic process of infectious diseases would make it possible to assess the scale of an impending pandemic and plan the necessary control measures. The study builds a model of the COVID-19 epidemic process to predict its dynamics based on neural networks. The target of the research is the epidemic process of infectious diseases, using the example of COVID-19. The research subjects are the methods and models of epidemic process simulation based on neural networks. As a result of this research, a simulation model of the COVID-19 epidemic process based on a neural network was built. The model showed high accuracy: from 93.11 % to 93.96 % for Germany, from 95.53 % to 95.54 % for Japan, from 97.49 % to 98.43 % for South Korea, and from 93.34 % to 94.18 % for Ukraine, depending on the forecasting period. The assessment of absolute errors confirms that the model can be used in healthcare practice to develop control measures to contain the COVID-19 pandemic. The contribution of this research is twofold. Firstly, the development of models based on the neural network approach makes it possible to estimate the accuracy of such methods applied to the simulation of the COVID-19 epidemic process. Secondly, an experimental study with the developed model applied to data from four countries contributes to the empirical evaluation of its effectiveness not only for COVID-19 but also for simulations of other infectious diseases. Conclusions. The research's significance lies in the fact that automated decision support systems for epidemiologists and other public health workers can improve the efficiency of anti-epidemic decision-making. This study is especially relevant in the context of the escalation of the Russian war in Ukraine, when the healthcare system's resources are limited.
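The windowed-forecasting setup behind such epidemic models can be sketched as follows; the 7-day lookback, the linear least-squares predictor standing in for the paper's neural network, and the accuracy-as-100 % -minus-MAPE metric are all assumptions made for illustration:

```python
import numpy as np

def make_windows(series, lookback=7):
    # turn a case-count series into (lookback window -> next value) pairs
    X = np.array([series[i:i + lookback]
                  for i in range(len(series) - lookback)], dtype=float)
    y = np.array(series[lookback:], dtype=float)
    return X, y

def mape_accuracy(y_true, y_pred):
    # accuracy expressed as 100 % minus the mean absolute percentage error
    return 100.0 - 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def fit_predict(series, lookback=7):
    # linear autoregression as a simple stand-in for the neural network
    X, y = make_windows(series, lookback)
    A = np.c_[X, np.ones(len(X))]  # add a bias column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y, A @ w
```

Reported country-level accuracies in the 93–98 % range would, under this metric, correspond to mean absolute percentage errors of roughly 2–7 %.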
Serhii Krivtsov, I. Meniailov, K. Bazilevych, D. Chumachenko, "Predictive model of COVID-19 epidemic process based on neural network," Radioelectronic and Computer Systems, 29 Nov 2022, doi: 10.32620/reks.2022.4.01.
Igor Kononenko, Maximilien Kpodjedo, Andrii Morhun, Maksym Oliinyk
The choice of a project portfolio management approach has a significant impact on the effectiveness of an organization. However, each organization that carries out project activities must choose not only a project portfolio management approach but also the degree to which its capabilities are used. This degree determines the organization's level of maturity in project portfolio management. Many maturity models are known, and using them often involves a long study and considerable cost to organizations. The paper aims to create an information technology for choosing a project portfolio management approach and the optimal level of an organization's maturity in project portfolio management. This information technology is created and presented in the form of an IDEF0 diagram. Using information about the organization and its environment, experts can investigate the application of different alternative project portfolio management approaches. In this analysis, they use the Project Portfolio Management Approach Selection Method and the Organizational Maturity Level Selection Method for Portfolio Management. With the first method, experts can select the most appropriate approach based on two criteria: the risks of non-performance or imperfect performance of the processes in the generalized portfolio management process table, and the cost of performing the approach's processes. The second method is used to assess an organization's level of maturity in portfolio management and to select the optimal maturity level. The information technology is based on two developed applications. The first is designed to select the project portfolio management approach; the second solves the problem of choosing the organization's maturity level in project portfolio management. Both applications have an intuitive interface, have been tested, and are ready for use. The information technology is intended for use by project portfolio managers.
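The two-criteria choice made by the first method (process risk versus execution cost) can be sketched as a simple weighted scoring rule; the approach names, scores, and equal weights below are hypothetical placeholders, not values from the paper:

```python
# Rank candidate portfolio-management approaches by a weighted sum of
# two criteria: process risk and the cost of performing the processes.
def best_approach(alternatives, w_risk=0.5, w_cost=0.5):
    return min(alternatives,
               key=lambda a: w_risk * a["risk"] + w_cost * a["cost"])

candidates = [
    {"name": "Approach A", "risk": 0.30, "cost": 0.70},
    {"name": "Approach B", "risk": 0.50, "cost": 0.40},
    {"name": "Approach C", "risk": 0.60, "cost": 0.60},
]
print(best_approach(candidates)["name"])  # → Approach B
```

Shifting the weights changes the winner, which mirrors how experts trade off risk against cost when comparing alternative approaches.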
Igor Kononenko, Maximilien Kpodjedo, Andrii Morhun, Maksym Oliinyk, "Information technology for choosing the project portfolio management approach and the optimal level of maturity of an organization," Radioelectronic and Computer Systems, 29 Nov 2022, doi: 10.32620/reks.2022.4.14.