Automated Detection of Assets and Calculation of their Criticality for the Analysis of Information System Security
Pub Date: 2019-09-19 | DOI: 10.15622/sp.2019.18.5.1182-1211
E. Doynikova, A. Fedorchenko, Igor Kotenko
The research aims to develop a technique for automated detection of information system assets and comparative assessment of their criticality for further security analysis of the target infrastructure. The assets are all information and technology objects of the target infrastructure. The size, heterogeneity, complexity of interconnections, distribution and constant modification of modern information systems complicate this task. Automated and adaptive determination of information and technology assets and of the connections between them, based on identifying the static and dynamic objects of an initially uncertain infrastructure, is a rather challenging problem. The paper proposes a dynamic model of connections between objects of the target infrastructure and a technique for building it based on the event correlation approach. The developed technique relies on statistical analysis of empirical data on system events. The technique allows determining the main object types of the analysed infrastructure, their characteristics and their hierarchy. The hierarchy is constructed considering the frequency of object use and, as a result, represents the objects' relative criticality for system operation. For these goals, indices are introduced that determine whether properties belong to the same type and whether properties are used jointly, as well as dynamic indices that characterize the variability of properties relative to each other. The resulting model is used for an initial comparative assessment of the criticality of system objects. The paper describes the input data, the developed models and the proposed technique for asset detection and comparison of asset criticality. Experiments that demonstrate the application of the developed technique on the example of analyzing Windows operating system security logs are provided.
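As a rough illustration of the frequency- and co-occurrence-based indices described in this abstract, the Python sketch below computes a joint-use index and a frequency-based criticality ranking from a few hand-made Windows-style event records; the field names, values and index formulas are illustrative assumptions, not the authors' definitions.

```python
from collections import Counter
from itertools import combinations

# Hypothetical pre-parsed Windows security events: each event is a dict of
# property_name -> value (e.g. fields extracted from logon event records).
events = [
    {"SubjectUserName": "alice", "IpAddress": "10.0.0.5", "ProcessName": "lsass.exe"},
    {"SubjectUserName": "alice", "IpAddress": "10.0.0.5", "ProcessName": "svchost.exe"},
    {"SubjectUserName": "bob",   "IpAddress": "10.0.0.7", "ProcessName": "svchost.exe"},
]

# Frequency of use of each (property, value) object across the event stream.
usage = Counter((p, v) for e in events for p, v in e.items())

# Joint-use index: fraction of events in which two objects occur together,
# a stand-in for the co-occurrence indices mentioned in the abstract.
pair_count = Counter()
for e in events:
    for a, b in combinations(sorted(e.items()), 2):
        pair_count[(a, b)] += 1
joint_use = {pair: c / len(events) for pair, c in pair_count.items()}

# Relative criticality as normalized frequency of use (higher = more critical).
total = sum(usage.values())
criticality = {obj: c / total for obj, c in usage.most_common()}

for obj, score in criticality.items():
    print(obj, round(score, 3))
```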
Accurate and Boundary Estimate of Communication Network Connectivity Probability Based on Model State Complete Enumeration Method
Pub Date: 2019-09-19 | DOI: 10.15622/sp.2019.18.5.1093-1118
K. Batenkov
We consider one of the methods for communication network structure analysis and synthesis, based on the simplest approach to connectivity probability calculation – the method of complete enumeration of typical network states. In this case, the typical states of the network are understood as the events of connectivity and disconnection of the network graph, which are simple chains (paths) and cuts of the graph. Despite the significant drawback of the typical state enumeration method, namely its considerable computational complexity, it is quite popular at the stage of debugging new analysis methods. In addition, boundary estimates of the network connectivity probability can be obtained on its basis. Thus, the calculation of the Esary–Proschan bounds uses the full set of disconnected (for the upper bound) and connected (for the lower bound) states of the communication network. These bounds are based on the statement that the network connectivity probability under the same conditions is higher (lower) than that of a network composed of a serial (parallel) connection of the complete set of independent disconnected (connected) subgraphs. The calculation of the Litvak–Ushakov bounds uses only edge-disjoint cuts (for the upper bound) and connected subgraphs (for the lower bound), i.e. subsets of elements such that no element occurs in two of them. These bounds take into account the well-known natural monotonicity property: network reliability decreases (increases) as the reliability of any element decreases (increases). From a computational point of view, the Esary–Proschan bounds have a major drawback: they require enumeration of all connected subgraphs to compute the upper bound and of all minimal cuts for the lower bound, which is in itself non-trivial. The Litvak–Ushakov bounds are free of this drawback: when calculating them, we can stop at any step of the search over variants of the sets of independent connected and disconnected graph states.
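The following sketch makes the complete-enumeration baseline concrete: it computes the exact connectivity probability of a small made-up network by summing the probabilities of all edge states in which the graph is connected. The graph and edge reliabilities are assumptions for illustration; the Esary–Proschan and Litvak–Ushakov bounds themselves are not reproduced here.

```python
from itertools import product

# Toy 4-node network: edges with independent operation probabilities (made up).
edges = {("a", "b"): 0.9, ("b", "c"): 0.8, ("c", "d"): 0.9, ("a", "c"): 0.7, ("b", "d"): 0.6}
nodes = {"a", "b", "c", "d"}

def connected(up_edges):
    """Check whether all nodes are in one component, given the working edges."""
    reached, frontier = {"a"}, ["a"]
    while frontier:
        v = frontier.pop()
        for (x, y) in up_edges:
            for u, w in ((x, y), (y, x)):
                if u == v and w not in reached:
                    reached.add(w)
                    frontier.append(w)
    return reached == nodes

# Complete enumeration of the 2^|E| edge states.
prob_connected = 0.0
edge_list = list(edges)
for state in product([True, False], repeat=len(edge_list)):
    p = 1.0
    up = []
    for e, works in zip(edge_list, state):
        p *= edges[e] if works else 1.0 - edges[e]
        if works:
            up.append(e)
    if connected(up):
        prob_connected += p

print(f"Connectivity probability (exact, full enumeration): {prob_connected:.4f}")
```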
Statistical Stability Analysis of Stationary Markov Models
Pub Date: 2019-09-19 | DOI: 10.15622/sp.2019.18.5.1119-1148
Julia Doronina, A. Skatkov
An approach is proposed to assess the quality of stationary Markov models without absorbing states on the basis of a measure of statistical stability: its description is formulated and its properties are determined. It is shown that the question of estimating the statistical stability of models has been raised by different authors either as a methodological aspect of model quality or within the framework of other model properties. When solving practical simulation problems, for example those based on Markov models, there is a pronounced problem of ensuring the required sample size. On the basis of the introduced formulations, a constructive approach is proposed to solving the problems of sample size optimization and of analysing the statistical volatility of a Markov model with respect to emerging anomalies under restrictions on the accuracy of the results; it ensures the required reliability and excludes non-functional redundancy. To analyze the type of transitions in the transition matrix, a measure of its divergence (normalized and centered) is introduced. This measure does not provide a complete description and is used as an illustrative characteristic of models with a certain property. The estimation of the divergence of transition matrices can be useful in the study of models with high sensitivity of detection of the studied properties of objects. The key stages of the approach associated with the study of quasi-homogeneous models are formulated. Quantitative estimates of the statistical stability and statistical volatility of a model are proposed on the example of modeling a real technical object with failures, recovery and prevention. The effectiveness of the proposed approaches is shown in solving the problem of statistical stability analysis within the qualimetric analysis of quasi-homogeneous models of complex systems. On the basis of the proposed constructive approach, an operational decision-making tool for the parametric and functional adjustment of complex technical objects over long-term and short-term horizons is obtained.
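A minimal sketch of the sample-size question raised above: a stationary three-state chain with made-up transition probabilities is simulated, the transition matrix is re-estimated from samples of increasing length, and the maximum deviation from the true matrix is used as a crude stability proxy. The deviation measure is an illustrative stand-in, not the paper's normalized and centered divergence.

```python
import numpy as np

rng = np.random.default_rng(0)

# True transition matrix of a small stationary chain (illustrative values).
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

def simulate(P, n, state=0):
    """Generate a state path of length n from the chain with matrix P."""
    path = [state]
    for _ in range(n - 1):
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

def estimate(path, k):
    """Maximum-likelihood estimate of the transition matrix from a path."""
    counts = np.zeros((k, k))
    for a, b in zip(path, path[1:]):
        counts[a, b] += 1
    counts += 1e-12            # avoid division by zero for unvisited states
    return counts / counts.sum(axis=1, keepdims=True)

# Deviation of the estimated matrix from the true one as the sample grows.
for n in (100, 1_000, 10_000, 100_000):
    P_hat = estimate(simulate(P, n), len(P))
    print(n, np.abs(P_hat - P).max().round(4))
```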
Comparison of Objects’ Images based on Computational Topology Methods
Pub Date: 2019-09-19 | DOI: 10.15622/sp.2019.18.5.1043-1065
S. Chukanov
The paper considers methods for comparing objects’ images represented by sets of points using computational topology methods. Algorithms for constructing sets of real-valued barcodes for the comparison of objects’ images are proposed. Determining the barcodes of object shapes allows us to study continuous and discrete structures, which makes the approach useful in computational topology. A distinctive feature of the proposed comparison methods, versus the methods of algebraic topology, is that they yield more information about an object's shape. An important area of application of real-valued barcodes is the study of invariants of big data. The proposed method combines the technology of barcode construction with embedded non-geometric information (color, time of formation, pen pressure), represented as functions on simplicial complexes. To this end, barcodes are extended with functions defined on simplexes in order to represent heterogeneous information. The proposed structure of extended barcodes increases the effectiveness of persistent homology methods in image comparison and pattern recognition. A modification of the Wasserstein method for finding the distance between images is proposed, which introduces non-geometric information about the distances between images through the differences of the functions on the corresponding simplexes of the source and terminal images. The geometric characteristics of an object can change under diffeomorphic deformations; the proposed algorithms for forming extended image barcodes are invariant to rotation and translation transformations. A method for determining the distance between sets of points representing curves is also considered, taking into account the orientation of the curves’ segments. The article is intended for a reader who is familiar with the basic concepts of algebraic and computational topology, the theory of Lie groups, and diffeomorphic transformations.
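As a self-contained illustration of barcode-based comparison, the sketch below computes the 0-dimensional persistence barcode of a point cloud by a union-find pass over the distance filtration and compares two clouds by a crude distance between sorted bar lengths. It covers only the geometric part: the extended barcodes with non-geometric functions and the modified Wasserstein distance from the paper are not reproduced.

```python
import numpy as np

def h0_barcode(points):
    """0-dimensional persistence barcode (connected components) of a point cloud
    under the Vietoris-Rips filtration, via a Kruskal-style union-find pass."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    bars = []                      # each merge kills one component born at 0
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, w))  # (birth, death); the essential bar is omitted
    return bars

def barcode_distance(b1, b2):
    """Crude 1-Wasserstein-style distance between sorted bar lengths (illustrative)."""
    l1 = sorted((d - b for b, d in b1), reverse=True)
    l2 = sorted((d - b for b, d in b2), reverse=True)
    m = max(len(l1), len(l2))
    l1 += [0.0] * (m - len(l1))
    l2 += [0.0] * (m - len(l2))
    return sum(abs(a - b) for a, b in zip(l1, l2))

rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 40)
circle = np.c_[np.cos(t), np.sin(t)] + rng.normal(scale=0.05, size=(40, 2))
blob = rng.normal(size=(40, 2))
print(barcode_distance(h0_barcode(circle), h0_barcode(blob)))
```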
Technique of Informative Features Selection in Geoacoustic Emission Signals
Pub Date: 2019-09-19 | DOI: 10.15622/sp.2019.18.5.1066-1092
Y. Senkevich, Y. Marapulets, O. Lukovenkova, A. Solodchuk
Studies of geoacoustic emission in a seismically active region of Kamchatka show that geoacoustic signals produce pronounced pulse anomalies during earthquake preparation and during post-seismic relaxation of the local stress field at the observation point. The qualitative selection of such anomalies is complicated by strong distortion and weakening of the signal amplitude. A review of existing acoustic emission analysis methods shows that researchers most often turn to the analysis of the more accessible statistical properties and energy of signals. The distinctive features of the approach proposed by the authors are the extraction of informative features based on the analysis of the time and time-frequency structures of geoacoustic signals, and the description of the various forms of recognizable pulses by a limited pattern set. This study opens up new ideas for developing methods to detect anomalous behavior of geoacoustic signals, including anomalies before earthquakes. The paper describes a technique of information extraction from geoacoustic emission pulse streams in the audio frequency range. A mathematical model of a geoacoustic pulse, reflecting the process of signal generation from a variety of elementary sources, is presented. A solution to the problem of detecting informative features of geoacoustic signals is presented by means of describing signal fragments with matrices of the amplitude ratios of local extrema and of the interval ratios between them. The result of applying the developed algorithm to automatically describe the structure of the detected pulses and to form a pattern set is shown. The patterns characterize the features of geoacoustic emission signals observed at IKIR FEB RAS field stations. A technique for reducing the dimensionality of the detected pulse set is presented; it allows patterns similar in structure to be found. A solution to the problem of processing a large data flow by unifying the pulse descriptions and systematizing them is proposed. A method to identify a geoacoustic emission pulse model using sparse approximation schemes is suggested. An algorithmic solution to the problem of reducing the computational complexity of the matching pursuit method is described; it consists in including an iterative refinement of the solution at each step of the method. The results of the research allowed the authors to create a tool for investigating the dynamic properties of geoacoustic emission signals in order to develop earthquake prediction detectors.
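The sketch below illustrates sparse approximation of a pulse over a dictionary of windowed sinusoids by a greedy pursuit with a least-squares re-fit of the selected atoms at each step; this OMP-style refinement is used only as a stand-in for the per-step refinement mentioned in the abstract, and the dictionary and pulse are synthetic.

```python
import numpy as np

def pursuit_with_refinement(signal, dictionary, n_atoms=3):
    """Greedy sparse approximation of `signal` over the columns of `dictionary`:
    at each step the most correlated atom is selected and the coefficients of
    all selected atoms are re-fitted by least squares before updating the residual."""
    residual = signal.copy()
    selected = []
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(dictionary.T @ residual)))
        if k not in selected:
            selected.append(k)
        sub = dictionary[:, selected]
        coeffs, *_ = np.linalg.lstsq(sub, signal, rcond=None)
        residual = signal - sub @ coeffs
    return selected, coeffs, residual

# Toy dictionary of Gaussian-windowed sinusoid atoms and a synthetic pulse.
t = np.linspace(0.0, 1.0, 512)
atoms = [np.exp(-((t - 0.5) ** 2) / 0.02) * np.sin(2 * np.pi * f * t) for f in range(5, 60, 5)]
D = np.stack(atoms, axis=1)
D /= np.linalg.norm(D, axis=0)
rng = np.random.default_rng(2)
pulse = 1.5 * D[:, 2] + 0.7 * D[:, 6] + 0.05 * rng.normal(size=t.size)

idx, coeffs, resid = pursuit_with_refinement(pulse, D, n_atoms=3)
print("selected atoms:", idx, "residual energy:", float(resid @ resid))
```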
Parametric Optimization of Packet Transmission with Resending Packets Mechanism
Pub Date: 2019-07-31 | DOI: 10.15622/SP.2019.18.4.809-830
N. Kuznetsov, K. Semenikhin
The data transmission process is modelled by a closed Markov queueing network consisting of two stations. The primary station describes the process of sending packets over a lossy channel by means of a finite single-channel queue. The auxiliary station, being a multichannel queueing system, accumulates the packets lost by the primary station and forwards them back for retrial. The transmission rate at the primary station and the retrial rate at the auxiliary station lie in specified ranges and are subject to optimization in order to minimize the time of successful delivery and the amount of network resources used. Explicit expressions for these characteristics are derived in the steady-state mode in order to formulate the bi-criterion optimization problem. Optimal policies are established in two scenarios: the first problem is to minimize the average time of successful transmission under limited resources; the second is to minimize the consumption of network resources under a constraint on the time of successful transmission. The set of Pareto-optimal policies is obtained by solving the problem of minimizing the augmented functional. The quality characteristics of approximate solutions that do not take into account the service rate in the auxiliary system are analyzed.
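A schematic view of the bi-criterion setup: candidate policies (transmission rate, retrial rate) are scanned over their admissible ranges, Pareto-optimal policies are filtered out, and the same frontier is traced by minimizing an augmented functional T + lambda*R. The criteria T and R below are placeholder monotone functions, not the closed-network steady-state expressions derived in the paper.

```python
import numpy as np

# Candidate policies: transmission rate mu at the primary station and retrial
# rate nu at the auxiliary station, each restricted to a given range.
mus = np.linspace(1.0, 5.0, 41)
nus = np.linspace(0.5, 3.0, 26)

# Placeholder criteria (NOT the paper's formulas):
# T = mean time to successful delivery, decreasing in both rates;
# R = consumed network resources, increasing in both rates.
def T(mu, nu):
    return 1.0 / mu + 0.5 / nu

def R(mu, nu):
    return 0.8 * mu + 0.3 * nu

policies = [(mu, nu, T(mu, nu), R(mu, nu)) for mu in mus for nu in nus]

# Pareto filter: keep policies not dominated in (T, R).
pareto = [p for p in policies
          if not any(q[2] <= p[2] and q[3] <= p[3] and (q[2] < p[2] or q[3] < p[3])
                     for q in policies)]

# The same frontier can be traced by minimizing the augmented functional T + lam * R.
for lam in (0.1, 0.5, 2.0):
    mu, nu, t, r = min(policies, key=lambda p: p[2] + lam * p[3])
    print(f"lambda={lam}: mu={mu:.2f}, nu={nu:.2f}, T={t:.3f}, R={r:.3f}")
print("Pareto-optimal policies found:", len(pareto))
```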
Analysis of Information Interaction Efficiency in Low-Orbit Satellite Constellations
Pub Date: 2019-07-18 | DOI: 10.15622/SP.2019.18.4.858-886
O. Karsaev
The objects of the research are networks and information interactions in low-orbit satellite constellations performing tasks of remote sensing of the Earth. Research into network creation issues is a necessary precondition in this case, since the possibilities and efficiency of information interaction directly depend on the capabilities of the network. DTN (Delay- and Disruption-Tolerant Networking) technology is the basis of network creation, and the CGR (Contact Graph Routing) approach is the basis of message routing. DTN technology and the CGR approach were originally developed and used to provide communication with spacecraft located in deep space. Therefore, the article discusses issues and problems arising in the context of their use in relation to low-orbit satellite constellations. The purpose of the information interaction study is the development of effective interaction schemes (protocols). The paper considers the schemes of information interaction that can be used by a group of satellites in the case of autonomous planning. Along with autonomous planning, the paper also considers the information interaction that can be used to implement network control of a satellite constellation in the case of ground planning. The effectiveness of the information interaction schemes is assessed by the efficiency of order execution, which is estimated via simulation of the communication network and of the corresponding scheme of information interaction.
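The core idea of the CGR approach can be sketched as an earliest-arrival-time search over a contact plan; the toy plan below (contacts with start, end and one-way light time) is made up, and real routing additionally accounts for queueing, bundle size and contact capacity.

```python
import heapq

# A contact plan: (sender, receiver, start, end, one_way_light_time).
# Values are illustrative; a real plan would come from orbit propagation.
contacts = [
    ("sat1", "sat2",   0, 300, 5),
    ("sat2", "gs",   200, 400, 8),
    ("sat1", "sat3",  50, 500, 5),
    ("sat3", "gs",   450, 900, 8),
]

def earliest_arrival(contacts, source, target, t0=0):
    """Dijkstra-like search over the contact graph: the 'distance' of a node is
    the earliest time a bundle can arrive there (the core idea behind CGR)."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == target:
            return t
        if t > best.get(node, float("inf")):
            continue
        for snd, rcv, start, end, owlt in contacts:
            if snd != node:
                continue
            depart = max(t, start)          # wait for the contact to open
            if depart > end:                # contact already closed
                continue
            arrive = depart + owlt
            if arrive < best.get(rcv, float("inf")):
                best[rcv] = arrive
                heapq.heappush(heap, (arrive, rcv))
    return None

print("Earliest delivery time sat1 -> gs:", earliest_arrival(contacts, "sat1", "gs"))
```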
Definition of Set of diagnostic Parameters of System based on the Functional Spaces Theory
Pub Date: 2019-07-18 | DOI: 10.15622/SP.2019.18.4.949-975
V. Senchenkov, D. Absalyamov, D. Avsyukevich
The development of the methodical and mathematical apparatus for forming a set of diagnostic parameters of complex technical systems, based on processing the trajectories of the output processes of the system using the theory of functional spaces, is considered in this paper. The trajectories of the output variables are treated as Lebesgue measurable functions. This ensures a unified approach to obtaining diagnostic parameters regardless of the physical nature of these variables and of the set of their jump-like changes (finite discontinuities of the trajectories). It adequately takes into account the complexity of the construction and the variety of physical principles and algorithms of system operation. A structure of factor-spaces of measurable square Lebesgue integrable functions (L2 spaces) is defined on the sets of trajectories. The properties of these spaces allow the trajectories to be decomposed over a countable set of mutually orthogonal directions and represented in the form of a convergent series. The choice of a set of diagnostic parameters as an ordered sequence of coefficients of the decomposition of trajectories into partial sums of Fourier series is substantiated. A procedure for forming a set of diagnostic parameters of the system, improved in comparison with the initial variants, in which the trajectory is decomposed into a partial sum of a Fourier series over an orthonormal Legendre basis, is presented. A method for the numerical determination of the cardinality of such a set is proposed. New aspects of obtaining diagnostic information from the vibration processes of the system are revealed. A structure of spaces of continuous square Riemann integrable functions is defined on the sets of vibro-trajectories. Since they are subspaces of the aforementioned factor-spaces, the general methodological basis for the transformation of vibro-trajectories remains unchanged. However, the algorithmic component of the choice of diagnostic parameters becomes more specific and observable. This is demonstrated by implementing a numerical procedure for decomposing vibro-trajectories over an orthogonal trigonometric basis contained in these spaces. The processing of the results of experimental studies of the vibration process, and the setting on this basis of a subset of diagnostic parameters at one of the control points of the system, is provided. The materials of the article are a contribution to the theory of obtaining information about the technical condition of complex systems. The applied value of the proposed development is the possibility of using it for the synthesis of the algorithmic support of automated diagnostic tools.
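A small sketch of the decomposition step: a sampled trajectory is expanded over the first N Legendre polynomials by least squares, and the ordered coefficients play the role of diagnostic parameters. The trajectory and the truncation order are illustrative assumptions.

```python
import numpy as np

# Sampled output trajectory of a system on [0, 1] (synthetic example).
t = np.linspace(0.0, 1.0, 400)
trajectory = np.sin(2 * np.pi * 3 * t) * np.exp(-t) + 0.02 * np.random.default_rng(3).normal(size=t.size)

# Least-squares expansion of the trajectory over the first N Legendre
# polynomials; the resulting coefficients serve as diagnostic parameters
# (a truncated Fourier-Legendre decomposition).
N = 12
series = np.polynomial.legendre.Legendre.fit(t, trajectory, deg=N - 1, domain=[0.0, 1.0])
diagnostic_parameters = series.coef

reconstruction = series(t)
print("first coefficients:", np.round(diagnostic_parameters[:5], 4))
print("max reconstruction error:", float(np.max(np.abs(reconstruction - trajectory))))
```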
Algorithms of Processing Fluorescence Signals for Mass Parallel Sequencing of Nucleic Acids
Pub Date: 2019-07-18 | DOI: 10.15622/SP.2019.18.4.1010-1036
V. Manoilov, A. Borodinov, I. Zarutsky, A. Petrov, V. Kurochkin
Determination of the nucleotide sequence of DNA or RNA containing from several hundred to hundreds of millions of monomer units makes it possible to obtain detailed information about the genomes of humans, animals and plants. Methods for deciphering the structure of nucleic acids were developed quite a long time ago, but the initial decoding methods were low-performing, inefficient and expensive. Methods for decoding nucleotide sequences of nucleic acids are usually called sequencing methods, and the instruments designed to implement them are called sequencers. Next-generation sequencing (NGS) and massively parallel sequencing are related terms describing high-throughput DNA sequencing technology in which the entire human genome can be sequenced within a day or two; the previous technology used to decipher the human genome required more than ten years to obtain final results. A hardware-software complex (HSC) for deciphering the nucleic acid (NA) sequences of pathogenic microorganisms by the NGS method is being developed at the Institute for Analytical Instrumentation of the Russian Academy of Sciences. The software included in the HSC plays an essential role in solving genome deciphering problems. The purpose of this article is to show the need for algorithms in the HSC software for processing the signals obtained in the course of genetic analysis when solving genome deciphering problems, and also to demonstrate the capabilities of these algorithms. The paper discusses the main signal processing problems and methods for solving them, including: automatic and semi-automatic focusing, background correction, detection of cluster images, estimation of the coordinates of their positions, creation of templates of clusters of NA molecules on the surface of the reaction cell, correction of the influence of neighboring optical channels on signal intensities, and assessment of the reliability of the results of genetic analysis.
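One of the listed steps, correction of the influence of neighboring optical channels, can be sketched as solving a linear system with a channel crosstalk matrix; the matrix and intensities below are made-up numbers, not calibration data from the described HSC.

```python
import numpy as np

# Hypothetical 4x4 crosstalk matrix: column j describes how the pure signal of
# dye j leaks into the four optical channels (values are illustrative only).
crosstalk = np.array([
    [1.00, 0.12, 0.02, 0.00],
    [0.10, 1.00, 0.08, 0.01],
    [0.01, 0.09, 1.00, 0.11],
    [0.00, 0.02, 0.13, 1.00],
])

def correct_channels(observed, crosstalk):
    """Recover per-dye intensities from observed channel intensities by solving
    observed = crosstalk @ true for `true` (one column vector per cluster)."""
    return np.linalg.solve(crosstalk, observed)

# Observed intensities for three clusters (columns), after background removal.
observed = np.array([
    [850.0,  40.0,  15.0],
    [120.0, 900.0,  30.0],
    [ 10.0,  95.0, 780.0],
    [  5.0,  20.0, 110.0],
])

true_intensities = correct_channels(observed, crosstalk)
base_calls = "ACGT"
for cluster in range(observed.shape[1]):
    call = base_calls[int(np.argmax(true_intensities[:, cluster]))]
    print(f"cluster {cluster}: corrected {np.round(true_intensities[:, cluster], 1)} -> call {call}")
```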
Formation of Quinary Gordon-Mills-Welch Sequences for Discrete Information Transmission Systems
Pub Date: 2019-07-18 | DOI: 10.15622/SP.2019.18.4.912-948
V. Starodubtsev
An algorithm for the formation of quinary Gordon-Mills-Welch sequences (GMWS) with a period of N = 5^4 - 1 = 624 over a finite field with a double extension GF[(5^2)^2] is proposed. The algorithm is based on a matrix representation of a basic M-sequence (MS) with a primitive check polynomial h_MS(x) and the same period. The transition to non-binary sequences is driven by increased requirements for the information content of information transfer processes, the transmission speed over communication channels, and the structural secrecy of the transmitted messages. It is demonstrated that the check polynomial h_G(x) of the GMWS can be represented as a product of fourth-degree factor polynomials that are irreducible over the prime field GF(5). The relations between the roots of the polynomial h_MS(x) of the basic MS and the roots of the polynomials h_ci(x) are obtained. The entire list of GMWS with period N = 624 can be formed on the basis of the obtained relations. It is demonstrated that for each of the 48 primitive fourth-degree polynomials that serve as check polynomials for the basic MS, three GMWS with equivalent linear complexity (ELC) of l_s = 12, 24, 40 can be formed. The total number of quinary GMWS with period N = 624 is equal to 144. A device for the formation of a GMWS as a set of shift registers with linear feedback is presented. The mod 5 multipliers and adders in the registers are arranged in accordance with the coefficients of the irreducible polynomials h_ci(x). The symbols from the registers arrive at a mod 5 adder, at whose output the GMWS is formed. Depending on the required ELC, the GMWS forming device consists of three, six or ten registers. The initial states of the cells of the shift registers are determined by decimation of the symbols of the basic MS at decimation indices equal to the minimum exponents of the roots of the polynomials h_ci(x). A feature of determining the initial states of the devices forming quinary GMWS, compared with binary sequences, is the presence of cyclic shifts of the summed sequences by a multiple of N/(p-1). The obtained results allow synthesizing devices for the formation of the complete list of 144 quinary GMWS with period N = 624 and different ELC. The results can also be used to construct other classes of pseudo-random sequences that admit an analytical representation in finite fields.
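The sketch below illustrates only the building blocks mentioned here: a linear-feedback shift register over GF(5) producing a base M-sequence of period 624 and decimation of its symbols. The feedback taps are found by brute-force search rather than taken from the paper, and the full GMW construction over GF[(5^2)^2] is not reproduced.

```python
from itertools import product

P, DEG, PERIOD = 5, 4, 5 ** 4 - 1   # quinary symbols, degree-4 register, period 624

def lfsr(feedback, state, length):
    """Fibonacci-style LFSR over GF(5): new symbol = sum(feedback[i] * state[i]) mod 5."""
    out, state = [], list(state)
    for _ in range(length):
        out.append(state[0])
        new = sum(f * s for f, s in zip(feedback, state)) % P
        state = state[1:] + [new]
    return out

def period(feedback, state=(0, 0, 0, 1)):
    """Length of the state cycle reached from the given nonzero initial state."""
    seen, s = {}, tuple(state)
    for step in range(PERIOD + 1):
        if s in seen:
            return step - seen[s]
        seen[s] = step
        new = sum(f * x for f, x in zip(feedback, s)) % P
        s = s[1:] + (new,)
    return None

# Search for feedback taps giving the maximal period, i.e. a primitive polynomial
# over GF(5) (the taps found here are NOT the polynomials used in the paper).
feedback = next(fb for fb in product(range(P), repeat=DEG) if fb[0] != 0 and period(fb) == PERIOD)
m_sequence = lfsr(feedback, (0, 0, 0, 1), PERIOD)

def decimate(seq, d):
    """Symbols of the sequence taken at indices 0, d, 2d, ... modulo the period."""
    return [seq[(i * d) % len(seq)] for i in range(len(seq))]

print("feedback taps:", feedback)
print("first symbols of the base M-sequence:", m_sequence[:12])
print("decimated by d=7:", decimate(m_sequence, 7)[:12])
```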