Many decision support systems process chaotic space-time processes that are non-separable and quasi-periodic. Examples of such systems include epidemic spreading, population development, fire spreading, radio wave signals, image processing, information encryption, and radio vision. Processes in these systems have a periodic character, e.g., seasonal fluctuations (epidemic spreading, population development) or harmonic fluctuations (pattern recognition, image processing). In their simulation blocks, existing systems use separable process models, which are represented as a product of spatial and temporal parts and are linearized. This significantly reduces the modeling quality for space-time non-separable processes. Building a high-quality model of the chaotic space-time non-separable process handled by a decision support system is necessary for obtaining a learning set. This is genuinely difficult, especially when the process being formed is random. Obtaining an ensemble of realizations of a chaotic space-time non-separable process is costly, which reduces system efficiency; moreover, in many cases an ensemble of realizations of space-time processes is impossible to obtain at all. In this work, a mathematical model of a quasi-periodic space-time non-separable process has been developed. Based on this model, a method for forming such a process has been developed and investigated. The epidemic spreading process is presented as an example.
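To make the separability distinction concrete, here is a minimal illustration (not the paper's exact formulation): a separable model factors into independent spatial and temporal parts, while even a simple traveling-wave term already couples them,

\[
u_{\mathrm{sep}}(x,t) = \varphi(x)\,\psi(t),
\qquad
u_{\mathrm{nonsep}}(x,t) = A\sin(\omega t - kx) + \varepsilon\,\xi(x,t),
\]

where the argument \(\omega t - kx\) mixes space and time, so the process cannot be written as a product of one-variable factors, and the random perturbation \(\xi(x,t)\) gives the quasi-periodic, chaotic character.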
{"title":"Analysis of quasi-periodic space-time non-separable processes to support decision-making in medical monitoring systems","authors":"O. D. Franzheva","doi":"10.15276/hait.03.2021.2","DOIUrl":"https://doi.org/10.15276/hait.03.2021.2","url":null,"abstract":"In many decisionsupport systemsthere are processedchaotic spatial-time processes which are non-separable and quasi-periodic. Some examples of such systemsareepidemic spreading, population development, fire spreading, radio wave signals, image processing, information encryption, radio vision, etc. Processes in these systems have periodic character, e.g. seasonal fluctuations(epidemic spreading, population development), harmonic fluctuations (pattern recognition, image processing),etc. In simulation block the existing systems use separable process models which are presented as multiplication of spatialand temporal parts and are linearized. This significantly reduces the quality of spatial-time non-separable processes. The quality model building of chaotic spa-tial-time non-separable processwhich is processed by decisionsupport systemis necessary for getting of learning set. Itis really complicated especially if the random process is formed. The implementation ensemble of chaotic spatial-time non-separable process requires high costs what causes reduction of the system efficiency. Moreover, in many cases the implementation ensemble of spatial-time processes is impossible to get. In this workthemathematical model of a quasi-periodic spatial-time non-separable process has been developed. Based on it the formation method of this process has been developed and investigated. The epidemic spreading pro-cessed was presented as an example","PeriodicalId":375628,"journal":{"name":"Herald of Advanced Information Technology","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122295152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The aim of the research is to reduce frame processing time for face segmentation in videos on mobile devices using deep learning technologies. The paper analyzes the advantages and disadvantages of existing segmentation methods, as well as their applicability to various tasks. Existing real-time implementations of face segmentation in the most popular mobile applications that provide functionality for adding visual effects to videos were compared. As a result, it was determined that classical segmentation methods do not offer a suitable combination of accuracy and speed and require manual tuning for each particular task, while neural network-based segmentation methods determine deep features automatically and achieve high accuracy at an acceptable speed. The method based on convolutional neural networks was chosen because, in addition to the advantages of other neural network-based methods, it does not require as significant an amount of computing resources during execution. A review of existing convolutional neural networks for segmentation was carried out, based on which the DeepLabV3+ network was chosen as having sufficiently high accuracy and being optimized for mobile devices. Modifications were made to the structure of the selected network to match the two-class segmentation task and to speed up operation on low-performance devices. 8-bit quantization was applied to the values processed by the network for further acceleration. The network was adapted to the face segmentation task by transfer learning performed on a set of face images from the COCO dataset. Based on the modified and additionally trained segmentation model, a mobile app was created to record video with real-time visual effects; it applies segmentation to add effects separately to two zones: the face (color filters, brightness adjustment, animated effects) and the background (blurring, hiding, replacement with another image). Frame processing time in the application was tested on mobile devices with different technical characteristics. We analyzed the differences in testing results between segmentation using the obtained model and segmentation using the normalized cuts method. The comparison reveals a decrease in frame processing time on the majority of devices with a slight decrease in segmentation accuracy.
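One concrete way to perform the 8-bit quantization step described above is TensorFlow Lite post-training integer quantization, sketched below. The model here is a tiny two-class stand-in (the paper's modified DeepLabV3+ structure and exact conversion settings are not given in the abstract), and the input size and calibration data are assumptions.

```python
import tensorflow as tf

# Stand-in two-class segmentation model; the modified DeepLabV3+ would be
# loaded here instead. Input size 257x257 is an assumption.
model = tf.keras.Sequential([
    tf.keras.layers.Input((257, 257, 3)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(2, 1, activation="softmax"),  # face / background
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_data():
    # Calibration frames for integer quantization; real face images
    # (e.g., the COCO subset mentioned above) would be used in practice.
    for _ in range(100):
        yield [tf.random.uniform([1, 257, 257, 3], 0.0, 1.0)]

converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("face_seg_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

The representative dataset lets the converter calibrate activation ranges, so weights and activations are both stored as 8-bit integers, which is what yields the speedup on low-performance devices.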
{"title":"DEEP LEARNING TECHNOLOGY FOR VIDEOFRAME PROCESSING IN FACE SEGMENTATION ON MOBILE DEVICES","authors":"V. Ruvinskaya, Yurii Yu. Timkov","doi":"10.15276/hait.02.2021.7","DOIUrl":"https://doi.org/10.15276/hait.02.2021.7","url":null,"abstract":"The aim of the research is to reduce the frame processing time for face segmentation on videos on mobile devices using deep learning technologies. The paper analyzes the advantages and disadvantages of existing segmentation methods, as well as their applicability to various tasks. The existing real-time realizations of face segmentation in the most popular mobile applications, which provide the functionality for adding visual effects to videos, were compared. As a result, it was determined that the classical segmentation methods do not have a suitable combination of accuracy and speed, and require manual tuning for a particular task, while the neural network-based segmentation methods determine the deep features automatically and have high accuracy with an acceptable speed. The method based on convolutional neural networks is chosen for use because, in addition to the advantages of other methods based on neural networks, it does not require such a significant amount of computing resources during its execution. A review of existing convolutional neural networks for segmentation was held, based on which the DeepLabV3+ network was chosen as having sufficiently high accuracy and being optimized for work on mobile devices. Modifications were made to the structure of the selected network to match the task of two classes segmentation and to speed up the work on devices with low performance. 8-bit quantization was applied to the values processed by the network for further acceleration. The network was adapted to the task of face segmentation by transfer learning performed on a set of face images from the COCO dataset. Based on the modified and additionally trained segmentation model, a mobile app was created to record video with real-time visual effects, which applies segmentation to separately add effects on two zones - the face (color filters, brightness adjustment, animated effects) and the background (blurring, hiding, replacement with another image). The time of frames processing in the application was tested on mobile devices with different technical characteristics. We analyzed the differences in testing results for segmentation using the obtained model and segmentation using the normalized cuts method. The comparison reveals a decrease of frame processing time on the majority of devices with a slight decrease of segmentation accuracy.","PeriodicalId":375628,"journal":{"name":"Herald of Advanced Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130188483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The introduction of new energy-consumption properties for positions and transitions into the checked properties of the extended reference Petri net, against which deviations of the tested Petri net are determined and a testing model is developed, provides new diagnostic possibilities. While keeping the class of checked properties (deviations of the incidence relations, correspondences, and marking functions of positions and transitions between the checked and reference Petri nets), the new properties make it possible to record the onset of critical temperature regimes that are either a consequence of errors or directly lead to their appearance. This versatility of testing helps to increase its completeness, accuracy, and efficiency. The energy-aware testing model is based on verification of the incidence relations, correspondences, and marking functions. Checking of the marking functions, when events are generated in positions and actions are performed in transitions, as well as the proposed checking of the energy-consumption indicators accumulated in monitor tokens, is performed together with checking of the incidence relations and correspondences. A distinctive feature of the testing model is the introduction of generalized recorders for energy-loaded Petri nets, which accumulate information about energy consumption in the behavior of positions and transitions, of topological components and subnets, and of the entire Petri net in the course of its operation. The testing model is also distinguished by recognition of the reference energy-loaded behavior when checking the Petri net, based on behavioral identification and coincidence of subsets of positions and transitions, determination of behavior, and the use of check primitives and transactions. The behavioral testing model defines the formal conditions for behavioral testing procedures, including analysis of the correctness of energy consumption. The dimensionality of the testing model was estimated using representations of Petri net graphs and of special reachable-state graphs, including Rabin-Scott automata, by means of list structures. These estimates define the limits of applicability of the formal testing model.
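A minimal sketch of the "monitor token" idea as read from this abstract: tokens carry an accumulated energy record, and each transition firing adds its energy cost, so energy consumption can be checked along the net's behavior. The structure, names, and energy values below are illustrative assumptions, not the authors' model.

```python
from dataclasses import dataclass

@dataclass
class Token:
    energy: float = 0.0   # energy consumed along this token's path

@dataclass
class Transition:
    name: str
    inputs: list          # input place names
    outputs: list         # output place names
    energy_cost: float    # energy charged when the transition fires

class EnergyPetriNet:
    def __init__(self, places):
        self.places = {p: [] for p in places}   # place name -> list of tokens

    def put(self, place, token):
        self.places[place].append(token)

    def enabled(self, t):
        return all(self.places[p] for p in t.inputs)

    def fire(self, t):
        if not self.enabled(t):
            raise RuntimeError(f"{t.name} is not enabled")
        consumed = [self.places[p].pop() for p in t.inputs]
        # Accumulate energy into the produced (monitor) tokens; a checker can
        # later compare these records against the reference net's expectations.
        total = sum(tok.energy for tok in consumed) + t.energy_cost
        for p in t.outputs:
            self.places[p].append(Token(energy=total))

# Usage: fire one transition and inspect the accumulated energy record.
net = EnergyPetriNet(["p1", "p2"])
net.put("p1", Token())
t1 = Transition("t1", inputs=["p1"], outputs=["p2"], energy_cost=2.5)
net.fire(t1)
print(net.places["p2"][0].energy)  # 2.5
```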
{"title":"BEHAVIORAL HIDDEN TESTING OF DISTRIBUTED INFORMATION SYSTEMS TAKING INTO ACCOUNT OF ENERGY","authors":"Oleksandr Martynyuk, O. V. Drozd, Sergiy Nesterenko, Vadym Yu. Skobtsov, Thuong Van Bui","doi":"10.15276/hait.02.2021.3","DOIUrl":"https://doi.org/10.15276/hait.02.2021.3","url":null,"abstract":"The introduction of new energy-consuming properties for positions and transitions into the checked properties of the extended reference Petri net, for which the deviations of the tested Petri net are determined and a testing model is developed, provides new diagnostic possibilities. Keeping the class of checked properties in the composition of deviations of incidence relations, correspondences and marking functions of positions and transitions for the checked and reference Petri nets, the new properties make it possible to record the appearance of critical temperature regimes that are a consequence of errors or directly leading to their appearance. This versatility of testing helps to increase its completeness, accuracy and efficiency. The energy-heavy testing model is based on verification of incidence, correspondence, and markup functions. Checking the markup functions when generating events in positions, performing actions in transitions, as well as the proposed checking of the energy consumption indicators accumulated in the monitor tokens, is performed when checking the incidence, correspondences. The features of the testing model include the input of generalized energy-loaded Petri nets recorders, accumulating information about energy consumption in the behavior of positions/transitions, topological components and subnets, the entire Petri net in the process of its functioning. The testing model is also distinguished by the recognition of the reference energy-loaded behavior when checking the Petri net based on behavioral identification and coincidence of subsets of positions/transitions, the determination of behavior, the use of check primitives and transactions. The behavioral testing model defines the formal conditions for behavioral testing procedures, including the analysis of the correctness of energy consumption. The dimensionality of the testing model was estimated using the representation of Petri net graphs, special graphs of attainable states, including Rabin-Scott automata, using list structures. These estimates define the limits of applicability of the formal testing model","PeriodicalId":375628,"journal":{"name":"Herald of Advanced Information Technology","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123001016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The importance of the modeling mode in computer visual pattern recognition systems is shown. The purpose of this mode is to determine the types of textures present in the images processed by intelligent diagnostic systems. Images processed in technical diagnostic systems contain texture regions that can be represented by different types of textures: spectral, statistical, and spectral-statistical. Texture identification methods used to identify and analyze texture images, such as statistical, spectral, expert, and multifractal methods, have been analyzed. To determine texture regions in images of a combined spectral-statistical nature, a hybrid texture identification method has been developed that takes into account the local characteristics of the texture through multifractal indicators, which characterize the non-stationarity and impulsiveness of the data, together with a spectral texture feature. The stages of the developed hybrid texture identification method are: preprocessing; formation of the primary feature vector; and formation of the secondary feature vector. The primary feature vector is formed for a selected rectangular fragment of the image, in which the multifractal features and the spectral texture feature are calculated. To reduce the feature space at the stage of forming the secondary identification vector, the principal component method was used. An experimental study of the developed hybrid texture identification method was carried out on model images of spectral, statistical, and spectral-statistical textures. The results showed that the developed method increases the probability of correctly determining regions of combined spectral-statistical texture. The developed identification method was tested on images from the Brodatz texture album and on images of wear zones of cutting tools, which are processed in intelligent technical diagnostic systems. The probability of correctly identifying areas of spectral-statistical texture in the images of wear zones of cutting tools averaged 0.9, which is sufficient for practical needs.
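The two-stage pipeline (primary feature vector per fragment, then PCA to a secondary vector) can be sketched as follows. The paper's multifractal estimators are replaced here by simple stand-in statistics; only the pipeline shape is shown, and all fragment sizes and feature choices are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def primary_features(fragment: np.ndarray) -> np.ndarray:
    # Crude spectral feature: dominant non-DC Fourier magnitude, normalized.
    f = np.abs(np.fft.fft2(fragment))
    spectral_peak = f[1:, 1:].max() / (f.sum() + 1e-12)
    # Stand-ins for the statistical / multifractal indicators of the paper.
    stats = [fragment.mean(), fragment.std(),
             np.abs(np.diff(fragment, axis=0)).mean()]
    return np.array([spectral_peak, *stats])

rng = np.random.default_rng(0)
fragments = [rng.random((32, 32)) for _ in range(50)]      # stand-in fragments
X = np.stack([primary_features(fr) for fr in fragments])   # primary vectors

pca = PCA(n_components=2)                 # secondary (reduced) feature vector
X_secondary = pca.fit_transform(X)
print(X_secondary.shape)                  # (50, 2)
```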
{"title":"HYBRID TEXTURE IDENTIFICATION METHOD","authors":"Natalya Volkova, V. Krylov","doi":"10.15276/hait.02.2021.2","DOIUrl":"https://doi.org/10.15276/hait.02.2021.2","url":null,"abstract":"The importance of the modeling mode in systems of computer visual pattern recognition is shown. The purpose of the mode is to determine the types of textures that are present on the images processed in intelligent diagnostic systems. Images processed in technical diagnostic systems contain texture regions, which can be represented by different types of textures - spectral, statistical and spectral-statistical. Texture identification methods, such as, statistical, spectral, expert, multifractal, which are used to identify and analyze texture images, have been analyzed. To determine texture regions on images that are of a combined spectral-statistical nature, a hybrid texture identification method has been developed which makes it possible to take into account the local characteristics of the texture based on multifractal indicators characterizing the non-stationarity and impulsite of the data and the sign of the spectral texture. The stages of the developed hybrid texture identification method are: preprocessing; formation of the primary features vector; formation of the secondary features vector. The formation of the primary features vector is performed for the selected rectangular fragment of the image, in which the multifractal features and the spectral texture feature are calculated. To reduce the feature space at the stage of formation of the secondary identification vector, the principal component method was used. An experimental study of the developed hybrid texture identification method textures on model images of spectral, statistical, spectralstatistical textures has been carried out. The results of the study showed that the developed method made it possible to increase the probability of correct determination of the region of the combined spectral-statistical texture. The developed identification method was tested on images from Brodatz album of textures and images of wear zones of cutting tools, which are processed in intelligent systems of technical diagnostics. The probability of correctly identifying areas of spectral-statistical texture in the images of wear zones of cutting tools averaged 0.9, which is sufficient for the needs of practice","PeriodicalId":375628,"journal":{"name":"Herald of Advanced Information Technology","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134398560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The article is devoted to the development of models and methods for detecting zero-day threats in cyberspace in order to improve the efficiency of detecting high-level malicious complexes that use polymorphic mutators. A method for detecting samples with antivirus solutions using public and local multiscanners is proposed. A method for diagnosing polymorphic malware using Yara rules is developed. A multicomponent service that makes it possible to organize a free malware analysis solution with a hybrid deployment architecture in public and private clouds is described. A cloud service for detecting malware based on open-source sandboxes and MAS is designed; it allows horizontal scalability in hybrid clouds and shows high capacity when processing malicious and non-malicious objects. The main task of the service is to collect artifacts after dynamic and static object analysis in order to detect zero-day threats. The effectiveness of the proposed solutions is shown. The scientific novelty and originality consist in the creation of the following methods: 1) detecting a sample with preinstalled antivirus solutions that allow static scanning in separate threads without request restrictions, which increases malware processing speed and restricts public access to confidential files; 2) diagnosing polymorphic malware using Yara rules, which makes it possible to detect new modifications that are not detected by available solutions. The proposed hybrid system architecture makes it possible to perform retrospective searches by family, track changes in destructive components, collect a database of malicious URLs to block traffic to C&C servers, collect dropped and downloaded files, analyze phishing email attachments, integrate with SIEM, IDS, IPS, antiphishing, and Honeypot systems, improve the work of the SOC analyst, decrease incident response times, and block new threats that are not detected by available antivirus solutions. The practical significance of the results lies in the development of a cloud service that combines a MAS sandbox and a modified distributed Cuckoo sandbox, which makes it possible to respond to zero-day threats quickly, store a knowledge base for correlating artifacts between polymorphic malware samples, actively search for new malware samples, and integrate with cyber protection hardware and software systems that support the Cuckoo API.
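To illustrate why Yara rules suit polymorphic families, the toy example below matches on a wildcarded code pattern and a stable string rather than exact hashes, so mutated variants can still trigger. The rule content and sample bytes are invented for illustration; this is not one of the paper's rules.

```python
import yara  # pip install yara-python

# Hypothetical rule: wildcards (??) and a byte jump ([4]) tolerate the
# varying bytes a polymorphic mutator inserts between stable fragments.
RULE = r'''
rule polymorphic_family_example
{
    strings:
        $mutator_stub = { 60 9C ?? ?? E8 [4] 61 }   // wildcarded opcode pattern
        $c2_fragment  = "beacon" nocase
    condition:
        uint16(0) == 0x5A4D and any of them          // MZ header plus any marker
}
'''

rules = yara.compile(source=RULE)
sample = b"MZ" + b"\x00" * 64 + b"Beacon traffic stub"   # toy PE-like blob
print([m.rule for m in rules.match(data=sample)])        # ['polymorphic_family_example']
```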
{"title":"MODELS AND METHODS FOR DIAGNOSING ZERO-DAY THREATS IN CYBERSPACE","authors":"Oleksandr S. Saprykin","doi":"10.15276/hait.02.2021.5","DOIUrl":"https://doi.org/10.15276/hait.02.2021.5","url":null,"abstract":"The article is devoted to the development of models and methods for detecting Zero-Day threats in cyberspace to improve the efficiency of detecting high-level malicious complexes that are using polymorphic mutators. The method for detecting samples by antivirus solutions using a public and local multiscanner is proposed. The method for diagnosing polymorphic malware using Yara rules is being developed. The multicomponent service that allows organizing a free malware analysis solution with a hybrid deployment architecture in public and private clouds is described. The cloud service for detecting malware based on open-source sandboxes and MAS, allowing horizontal scalability in hybrid clouds, and showing high capacity during malicious and non-malicious object processing is designed. The main task of the service is to collect artifacts after dynamic and static object analysis to detect zero-day threats. The effectiveness of the proposed solutions is shown. Scientific novelty and originality consist in the creation of the following methods: 1) detecting the sample by preinstalled antivirus solutions that allow static scanning in separate threads without requests restrictions for increasing the malware processing speed and restrict public access to confidential files; 2) diagnosing polymorphic malware using Yara rules, that allows detecting new modifications that are not detected by available solutions. The proposed hybrid system architecture allows to perform a retrospective search by families, tracking changes in destructive components, collect the malicious URLs database to block traffic to C&C servers, collect dropped and downloaded files, analyze phishing emails attachments, integrate with SIEM, IDS, IPS, antiphishing and Honeypot systems, improve the quality of the SOC analyst, decrease the incidents response times and block new threats that are not detected by available antivirus solutions. The practical significance of the results is in the cloud service development that combines MAS Sandbox and a modified distributed Cuckoo sandbox, which allows to respond to Zero-Day threats quickly, store a knowledge base for artifacts correlation between polymorphic malware samples, actively search for new malware samples and integrate with cyber protection hardware and software systems that support the Cuckoo API.","PeriodicalId":375628,"journal":{"name":"Herald of Advanced Information Technology","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114553556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The paper presents an approach to the design of technical systems whose elements are interconnected and carry out an internal exchange of energy. The analysis shows that for heat-exchange equipment, when devices are combined into systems, only iterative methods are currently used, a representative of which is pinch analysis. A limitation of the iterative approach is that an exact solution of such problems cannot be obtained; this is achievable only by analytical methods, which also make it possible to reveal effects in systems that are practically inaccessible to numerical solution. This indicates the absence of a rigorous proof of the existence of a solution and a problem in constructing approximate solutions, due to the need to involve complementary hypotheses. The topological representation of the system modules allows the architecture to be considered as a network, which aids the analysis of the connections between the constituent elements and the identification of their mutual influence. Typical connections of network elements are distinguished (serial, parallel, and contour), which makes it possible to unify the principles of building connections in the system. As the optimality criterion, the NTU parameter was chosen; it includes the heat-exchange surface and is usually used when searching for solutions for heat exchangers of moving objects. An analytical solution to the problem of flow distribution and energy-exchange efficiency in a system of two series-connected heat exchangers is obtained. Its analysis showed that formulating the design problem through the definition of matrix elements in relation to determinants makes it possible not only to meet the requirements imposed on the system, but also to determine design parameters of its elements that satisfy their extreme characteristics.
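For reference, the standard textbook definitions behind the NTU criterion mentioned above (background relations, not the paper's derivation):

\[
\mathrm{NTU} = \frac{UA}{C_{\min}},
\qquad
\varepsilon = \frac{q}{q_{\max}} = \frac{q}{C_{\min}\,(T_{h,\mathrm{in}} - T_{c,\mathrm{in}})},
\]

where \(U\) is the overall heat-transfer coefficient, \(A\) the heat-exchange surface, and \(C_{\min}\) the smaller of the two stream heat-capacity rates. In the limiting case where one stream has a much larger capacity rate, \(\varepsilon = 1 - e^{-\mathrm{NTU}}\), so a required effectiveness translates directly into a required surface \(A\), which is why NTU serves as an optimality criterion tied to the heat-exchange surface.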
{"title":"IMPROVING THE DESIGNING METHOD OF THERMAL NETWORKS: SERIAL CONNECTION OF STREAMS","authors":"Georgy Derevyanko, V. Mescheryakov","doi":"10.15276/hait.02.2021.4","DOIUrl":"https://doi.org/10.15276/hait.02.2021.4","url":null,"abstract":"The paper presents an approach to the design of technical systems, the elements of which are interconnected and carry out an internal exchange of energy. The above analysis showed that for heat-exchange equipment when combining devices into systems, only iterative methods are currently used, a representative of which is Pinch analysis. A limitation of the iterative approach is the impossibility of obtaining an exact solution to such problems, which can only be achieved by analytical methods, which also make it possible to reveal some effects in systems that are practically unavailable for numerical solution. This indicates the absence of a rigorous proof of the existence of a solution and a problem in the construction of approximate solutions, due to the need to involve complementary hypotheses. The topological representation of the system modules allows us to consider the architecture as a network, which contributes to the analysis of the connections between the constituent elements and the identification of their mutual influence. Highlighted the typical connections of network elements such as serial, parallel, contour, which allows to unify the principles of building connections in the system. As an optimality criterion, the NTU parameter was chosen, which includes the heat exchange surface and is usually used when searching for a solution for heat exchangers of moving objects. An analytical solution to the problem of flow distribution and energy exchange efficiency in a system of two series-connected heat exchangers is obtained. His analysis showed that the formulation of the design problem based on the definition of matrix elements in relation to determinants allows not only to meet the requirements for the system, but also to determine the design parameters of its elements that satisfy their extreme characteristics","PeriodicalId":375628,"journal":{"name":"Herald of Advanced Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130633281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The research deals with improving methods and systems of control over power systems by intellectualizing dispatch decision support. The paper presents the development of a principal trigger scheme for the decision support system algorithm. The proposed model of algorithm visualization, in the form of a trigger state network of the computer system, provides interaction with power objects of mining and metallurgical complexes and regions. A new interpretation of the components of the network trigger model is introduced. The model is interactively related both to user-operator actions and to the states of power system components. The state of the automaton model is associated with the fulfillment of a set of metarules that control the logical inference. New forms of presenting knowledge base control algorithms are proposed, which interact with the external environment, aggregate primitives of states, triggers, and transactions of operations, and generalize standard algorithm visualization languages. This allows unification of smart systems interacting with the external environment. The authors develop models for representing knowledge base processing algorithms interacting with power objects, which combine states, triggers, and transaction operations and generalize standard algorithm visualization languages. This enables description of the functioning knowledge base algorithms and their event model, which provides reliable unification of smart systems interacting with control objects of mining and metallurgical power systems. The research solves the problem of building a knowledge base and a software complex for the dispatch decision support system based on the data of computational experiments on the power system scheme. The research results indicate the practical effectiveness of the proposed approaches and models.
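A toy sketch of how metarules can gate production-rule firing in dispatch advice, in the spirit of the trigger control described above. The facts, rules, and advice strings are invented for illustration and do not reproduce the paper's ontology.

```python
# Facts describe the current power-system state (illustrative values).
facts = {"line_overload": True, "breaker_open": False}

# Production rules: condition over the facts, plus dispatch advice.
rules = [
    {"name": "shed_load",
     "if": lambda f: f["line_overload"] and not f["breaker_open"],
     "then": "reduce feeder load"},
    {"name": "close_ring",
     "if": lambda f: f["breaker_open"],
     "then": "close ring breaker"},
]

def metarule(rule, f):
    # Metarule controlling inference: in an overload state, only
    # load-relief rules are triggered.
    return not f["line_overload"] or rule["name"].startswith("shed")

for rule in rules:
    if metarule(rule, facts) and rule["if"](facts):
        print(rule["name"], "->", rule["then"])   # shed_load -> reduce feeder load
```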
{"title":"PRODUCTION RULE ONTOLOGY OF AUTOMATIZED SMART EMERGENCY DISPATCHING SUPPORT OF THE POWER SYSTEM","authors":"V. Morkun, I. Kotov, O. Serdiuk, Iryna A. Haponenko","doi":"10.15276/hait.02.2021.6","DOIUrl":"https://doi.org/10.15276/hait.02.2021.6","url":null,"abstract":"The research deals with improving methods and systems of control over power systems based on intellectualization of dispatch decision support. There are results of developing a principal trigger scheme of the decision support system algorithm. The proposed model of algorithm visualization in the form of a trigger state network of the computer system provides interaction with power objects of mining and metallurgical complexes and regions. A new interpretation of components of the network trigger model is introduced. The model is interactively related to both user-operator actions and states of power system components. With that, the state of the automata model is associated with fulfillment a set of metarules to control the logical inference. There are new forms of presenting algorithms controlling knowledgebases that interact with the external environment and aggregate primitives of states, triggers and transactions of operations and generalize standard visualization languages of algorithms are proposed. This allows unification of smart systems interacting with the external environment. The authors develop models for representing knowledgebase processing algorithms interacting with power objects that combine states, triggers and transaction operations and generalize standard visualization languages of algorithms. This enables description of functioning database algorithms and their event model, which provides a reliable unification of smart systems interacting with control objects of mining and metallurgical power systems. The research solves the problem of building a knowledgebase and a software complex of the dispatch decision support system based on the data of computational experiments on the power system scheme. The research results indicate practical effectiveness of the proposed approaches and designed models","PeriodicalId":375628,"journal":{"name":"Herald of Advanced Information Technology","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132165284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information support for modern computer-aided design of products and processes is considered in this review in accordance with the methodology of the integrated CAD/CAM/CAE system. Three levels of the management hierarchy at the design and production stages are considered. At the top (organizational) level, computer-aided design of the product structure and its manufacturing technology is performed. At the middle (coordinating) level, binding to existing technological equipment and debugging of individual fragments of the control program are performed. At the lower (executive) level, the control program is finally created, debugged, and executed. A distinctive feature of the proposed automation methodology at the design and production stages is the use of feedback from the lower level to the middle and upper levels to correct the decisions made there, taking into account the management powers existing at these levels of the hierarchy. Thus, the indicated levels of the hierarchy of the intelligent system correspond to the hierarchy of objects and subjects of management and control, taking into account the powers (and capabilities) of management and control at each level. Information is a basic category not only in information (virtual) technology for its transformation and transmission, but also in the physical technology of material production in the manufacture of the corresponding material product. Such technologies, as a rule, contain preparatory (pre-production) and executive (implementation) stages. At the preparatory stage, a virtual product is created (an information model of a real product in the form of virtual reality), and at the executive stage, a real (physical) product appears that has a use value (possession utility). This review describes the features of information processing at both stages of production in order to increase its efficiency.
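A schematic sketch of the three-level loop with upward feedback, purely illustrative: the levels, data passed between them, and the correction threshold are assumptions, not the review's formalization.

```python
# Organizational level plans, coordinating level binds the plan to
# equipment, executive level runs it and reports deviations back up.

def organizational(product):
    return {"product": product, "process_plan": ["rough", "finish"]}

def coordinating(plan, feedback=None):
    bound = dict(plan, machine="CNC-1")           # binding to equipment (assumed)
    if feedback and feedback["deviation"] > 0.1:  # correction driven from below
        bound["machine"] = "CNC-2"
    return bound

def executive(bound_plan):
    # Run the control program and measure a deviation (stand-in value).
    return {"deviation": 0.12}

plan = organizational("shaft")
report = executive(coordinating(plan))
corrected = coordinating(plan, feedback=report)   # feedback loop in action
print(corrected["machine"])                       # CNC-2
```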
{"title":"COMPUTER-AIDED DESIGN AND PRODUCTION INFORMATION SUPPORT","authors":"V. Larshin, N. Lishchenko, O. Babiychuk, J. Pitel’","doi":"10.15276/hait.02.2021.1","DOIUrl":"https://doi.org/10.15276/hait.02.2021.1","url":null,"abstract":"Information support for modern computer-aided design of products and processes is considered in this review in accordance with the methodology of the integrated CAD/CAM/CAE system. Three levels of the management hierarchy at the design and production stages are considered. At the top (organizational) level, computer-aided design of the product structure and its manufacturing technology is performed. At the middle (coordinating) level, a binding to existing technological equipment and debugging of individual fragments of the control program are performed. At the lower (executive) level, the control program is finally created, debugged and executed. A distinctive feature of the proposed automation methodology at the design and production stages is the use of feedback from the lower level to the middle and upper levels to correct the decisions made there, taking into account the existing management powers at these levels of the hierarchy. Thus, the indicated levels of the hierarchy of the intelligent system correspond to the hierarchy of objects and subjects of management and control, taking into account the powers (and capabilities) of management and control at each level. Information is a basic category not only in information (virtual) technology for its transformation and transmission, but also in physical technology of material production in the manufacture of a corresponding material product. Such technology as a rule, contain preparatory (pre-production) and executive (implementation) stages. At the preparatory stage, a virtual product is created (an information model of a real product in the form of virtual reality), and at the executive stage, a real (physical) product appears that has a use value (possession utility). This research describes the features of information processing at both stages of production in order to increase its efficiency.","PeriodicalId":375628,"journal":{"name":"Herald of Advanced Information Technology","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134305793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This work continues studies of the dynamic characteristics of thermoelectric coolers, aimed at analyzing the influence of temperature differences, current operating modes, design parameters of the device, and the physical parameters of the thermoelement material on the time constant. The article analyzes the effect of the heat-sink capacity of the radiator on the dynamic characteristics, energy indicators, and reliability indicators of a single-stage thermoelectric cooler. A dynamic model of a thermoelectric cooler has been developed that takes into account the weight and size parameters of the radiator and relates the main energy indicators of the cooler to the heat-removal capacity of the radiator, the operating currents, the value of the heat load, and the relative temperature difference. Analysis of the dynamic model shows that, with an increase in the heat-removing capacity of the radiator at a given thermal load and various current modes, the main parameters of the cooler change: the required number of thermoelements, the power consumption, the time to reach the stationary mode, and the relative failure rate are reduced. With an increase in the relative operating current, the time to reach the stationary mode of operation decreases for different values of the heat-sink capacity of the radiator. It is shown that the minimum time to reach the stationary operating mode is provided in the maximum refrigerating capacity mode. The studies were carried out at different values of the heat-sink capacity of the radiator in the operating range of temperature drops and thermoelement geometries. The possibility of minimizing the heat-dissipating surface of the radiator at various current operating modes is shown, together with its relationship to the main parameters, the reliability indicators, and the time to reach the stationary operating mode. Comparative analysis of the weight and size characteristics, main parameters, reliability indicators, and dynamics of operation under rational design makes it possible to choose compromise solutions, taking into account the weight of each of the limiting factors.
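For orientation, the standard steady-state relations for a single-stage thermoelement, on which such models are typically built (textbook background; the paper's dynamic model additionally brings in the radiator parameters):

\[
Q_0 = \alpha I T_0 - \tfrac{1}{2} I^2 R - K\,\Delta T,
\qquad
W = I^2 R + \alpha I\,\Delta T,
\qquad
\varepsilon = \frac{Q_0}{W},
\]

where \(\alpha\) is the Seebeck coefficient, \(R\) and \(K\) are the electrical resistance and thermal conductance of the thermoelement branch pair, \(T_0\) is the cold-junction temperature, \(\Delta T\) the temperature difference, \(I\) the operating current, \(Q_0\) the cooling capacity, \(W\) the power consumption, and \(\varepsilon\) the coefficient of performance.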
{"title":"THERMAL CONTROL OF THERMOELECTRIC COOLING DEVICES OF TRANSMISSION AND RECEIVING ELEMENTS OF ON-BOARD INFORMATION SYSTEMS","authors":"V. Mescheryakov, V. Zaykov, Y. Zhuravlov","doi":"10.15276/hait.04.2020.5","DOIUrl":"https://doi.org/10.15276/hait.04.2020.5","url":null,"abstract":"The work is a continuation of studies of the dynamic characteristics of thermoelectric coolers aimed at analyzing the influence of temperature differences, current operating modes, design parameters of the device and physical parameters of the material of thermoelements for a time constant. The article analyzes the effect of the heat sink capacity of the radiator on the dynamic characteristics, energy and reliability indicators of a single-stage thermoelectric cooler. A dynamic model of a thermoelectric cooler has been developed taking into account the weight and size parameters of the radiator, which relate the main energy indicators of the cooler with the heat removal capacity of the radiator, operating currents, the value of the heat load and the relative temperature difference. The analysis of the dynamic model shows that with an increase in the heat-removing capacity of the radiator at a given thermal load and various current modes, the main parameters of the cooler change. The required number of thermoelements, power consumption, time to reach a stationary mode, and relative failure rate are reduced. With an increase in the relative operating current, the time to reach the stationary mode of operation decreases for different values of the heat sink capacity of the radiator. It is shown that the minimum time to reach the stationary operating mode is provided in the maximum refrigerating capacity mode. The studies were carried out at different values of the heat sink capacity of the radiator in the operating range of temperature drops and the geometry of thermoelements. The possibility of minimizing the heat-dissipating surface of the radiator at various current operating modes and the relationship with the main parameters, reliability indicators and the time to reach the stationary operating mode are shown. Comparative analysis of weight and size characteristics, main parameters, reliability indicators and dynamics of functioning with rational design makes it possible to choose compromise solutions, taking into account the weight of each of the limiting factors.","PeriodicalId":375628,"journal":{"name":"Herald of Advanced Information Technology","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115034260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article presents the results of applying the Ky Fan norm to the problem of video segmentation. Since video analysis can be considered as analysis of a sequence of images, a way was sought to formalize the description of a video frame using the mathematical apparatus of non-square matrices. When choosing a method, particular attention was paid to universality with respect to the dimension of the initial data, because of the technical characteristics and nature of video data: video frames are matrices of arbitrary dimension. The ability to skip the step of transforming the matrix to square form, or of vectorizing it using some descriptor, reduces the computational costs required for this transformation. The value of the Ky Fan norm was chosen as the image descriptor, since it is built on top of the matrix singular values. As is known, singular values are computed in the singular value decomposition of a matrix and can be used, among other things, to reduce the dimension of the source data. The singular value decomposition imposes no restrictions on either the dimension or the character of the elements of the original matrix. In addition, it can be used to derive other matrix decompositions with required characteristics. A comparative analysis of the effectiveness of the obtained descriptor was carried out for the k-norm and the 1-norm; it showed that the 1-norm identifies the most significant changes in the scene, while the k-norm is able to detect minor ones. In other words, depending on the context of the source video data and the scope of the developed application, the sensitivity of the application to scene changes can be configured by varying the number of singular values involved. The decision about the presence of changes in the video scene is made by comparing the descriptors of two consecutive images, that is, the values of the Ky Fan norm.
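The Ky Fan k-norm is the sum of the k largest singular values, so the frame comparison described above reduces to a few lines of linear algebra. In the sketch below the frames are random stand-ins for grayscale video frames, and the decision threshold is illustrative.

```python
import numpy as np

def ky_fan_norm(frame: np.ndarray, k: int) -> float:
    # Singular values come back sorted in descending order; the Ky Fan
    # k-norm is the sum of the k largest. Works for non-square matrices.
    s = np.linalg.svd(frame, compute_uv=False)
    return float(s[:k].sum())

rng = np.random.default_rng(1)
prev = rng.random((120, 160))                 # frames need not be square
curr = prev + 0.05 * rng.random((120, 160))   # slightly changed next frame

k = 10   # k = 1 reacts only to major scene changes; larger k detects minor ones
change = abs(ky_fan_norm(curr, k) - ky_fan_norm(prev, k))
print(f"Ky Fan descriptor change: {change:.3f}")
if change > 0.5:                              # threshold chosen for illustration
    print("scene change detected")
```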
{"title":"KY FAN NORM APPLICATION FOR VIDEO SEGMENTATION","authors":"Myroslava Koliada","doi":"10.15276/hait01.2020.1","DOIUrl":"https://doi.org/10.15276/hait01.2020.1","url":null,"abstract":"This article presents results of applying the KyFan norm in the context of solving the problem of video segmentation. Since the task of video analysis can be considered as analysis of the sequence of images, it was decided to find a way to formalize the description of the video frame using the mathematical apparatus of non-square matrices. When choosing a method, particular attention was paid precisely to universality with respect to the dimension of the initial data due to the technical characteristics and nature of the video data -video frames are matrices of arbitrary dimension. The ability to skip the step of matrix transformation to square dimension, or vectorization using some descriptor allows you to reduce computational costsrequired for this transformation. It was decided to use the value of the Ky Fan norm as an image descriptor, since it is built on top of matrix singular values. As it is known, singular values are calculated during the singular decomposition of the matrix and can be used, among other features, to reduce the dimension of the source data. A singular decomposition does not impose restrictions on either the dimension or the character of the elements of the original matrix. In addition, it can be used to derive other matrix decompositions with required characteristics. A comparative analysis of the effectiveness of the obtained descriptor was carried out in the case of using the k-norm and 1-norm, which showed that the 1-norm allows us to identify the most significant changes in the scene, while k -norm is able to detect minor. In other words, depending on the context of the source video data and the scope of the developed application, it is possible to configure the sensitivity of the application to a change in the scene by varying thenumber of singular values involved. The decision about the presence of changes in the context of video scene is made based on a comparison of descriptors of two consecutive images, that is, the values of the Ky Fan norm.","PeriodicalId":375628,"journal":{"name":"Herald of Advanced Information Technology","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115194897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}