One of the key factors in successful learning is monitoring how well students have mastered the material covered, which can be implemented, among other ways, through test assignments. The work is devoted to the use of the LuaLaTeX desktop publishing system to automate the generation of test papers. A generated test includes a set number of randomly selected tasks on the specified topics, with both the questions and their answer options shuffled. Each generated test is personalized, i.e. produced for a specific student, and takes the form of a PDF document. To achieve this, the capabilities of the LuaLaTeX publishing system are used: the Lua scripting language forms the pool of test tasks and personalizes it; the TeX language typesets the content of the generated test; and the capabilities of the PDF format, including the JavaScript programming language, implement automated checking of the test, display of the correct answers, saving of the test result, and additional security, which consists in storing hashes of the correct answers instead of the answers themselves, together with a password. The practical significance of the proposed solution is an increase in the efficiency of monitoring students' mastery of the material, achieved by reducing the time needed to prepare and check test papers and by requiring only a minimal set of software tools.
{"title":"The Method for Implementing Test Constructors Using the Lualatex Publishing System","authors":"Y. Polishuk, Ya. V. Goncharova","doi":"10.17587/it.30.50-55","DOIUrl":"https://doi.org/10.17587/it.30.50-55","url":null,"abstract":"One of the key factors of successful learning is the control of mastering the material that has been passed, which can be implemented, including by performing control test tasks. The work is devoted to the use of the desktop publishing system LuaLaTeX to automate the process of forming test control papers. The control generated with its help includes a set number of randomly selected test tasks on the specified topics, followed by mixing both the questions themselves and their answer options. Each generated control work is personalized, i.e. generated directly for a specific student, and is a control work in the form of a PDF-document. To achieve the goal of the work, the capabilities of the LuaLaTeX publishing system are used, which include: the Lua scripting language, which is used to form a pool of control work tasks and personalize it; the TeX language is used to design the content of the control work being formed; the capabilities of the PDF format, including the JavaScript programming language, implement automated verification of control work, demonstration of correct answers to questions, preservation of the test result and provide additional security, which consists in the use of hashes instead of correct answers to tasks and a password. 
As a practical significance of the proposed solution, we can note an increase in the efficiency of the process of controlling the development of the material passed by students by minimizing the speed of preparation of test control works and their verification, as well as the use of a minimum software tools for its implementation.","PeriodicalId":504905,"journal":{"name":"Informacionnye Tehnologii","volume":"96 19","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139612440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
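The core mechanism described above can be sketched outside the PDF context. In the following Python sketch the question pool, salt, and student identifier are invented for illustration; the paper implements this with Lua inside LuaLaTeX and JavaScript inside the PDF. It shows per-student random selection, shuffling of questions and options, and storing salted hashes instead of the correct answers:

```python
import hashlib
import random

# Hypothetical question pool; in the paper the pool is built in Lua.
POOL = {
    "topic1": [
        {"q": "2 + 2 = ?", "answers": ["4", "3", "5"], "correct": "4"},
        {"q": "3 * 3 = ?", "answers": ["9", "6", "8"], "correct": "9"},
    ],
    "topic2": [
        {"q": "Capital of France?", "answers": ["Paris", "Rome", "Lima"], "correct": "Paris"},
    ],
}

def build_test(student_id, picks_per_topic=1, salt="s3cret"):
    """Select random tasks per topic, shuffle questions and options,
    and keep only salted hashes of the correct answers."""
    rng = random.Random(student_id)   # personalization: seed by student id
    selected = []
    for topic, questions in POOL.items():
        selected += rng.sample(questions, min(picks_per_topic, len(questions)))
    rng.shuffle(selected)
    test = []
    for item in selected:
        opts = item["answers"][:]
        rng.shuffle(opts)
        digest = hashlib.sha256((salt + item["correct"]).encode()).hexdigest()
        test.append({"q": item["q"], "options": opts, "hash": digest})
    return test

def check(answer, item, salt="s3cret"):
    """Verify an answer without the correct answer ever being stored."""
    return hashlib.sha256((salt + answer).encode()).hexdigest() == item["hash"]
```

Because only hashes are embedded, a student inspecting the generated document cannot read off the answers, which mirrors the security idea in the abstract.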
The article is devoted to the construction of simulation models of customer delay in the queue of a queuing system (QS) described by both ordinary and right-shifted hyper-Erlang and second-order Erlang distributions. This article is a logical continuation of previous works devoted to the construction of numerical-analytical QS models with shifted distribution laws. Here the Erlang distribution is considered as a special case of the more general gamma distribution law, in contrast to the normalized Erlang distribution. These two forms of the Erlang distribution differ in their numerical characteristics, except for the coefficient of variation. The GPSS World discrete-event simulation system was used to solve the problem.
{"title":"Simulation Modeling of QS with Hyper-Erlang and Erlang Distributions","authors":"V. Tarasov, N. Bakhareva","doi":"10.17587/it.30.3-12","DOIUrl":"https://doi.org/10.17587/it.30.3-12","url":null,"abstract":"The article is devoted to the construction of simulation models of customer delay in the queue in the form of a queuing system (QS) described by both ordinary and right-shifted hyper-Erlang and second-order Erlang distributions. This article is a logical continuation of previous works devoted to the construction of numerical-analytical QS models with shifted distribution laws. In the article, the Erlang distribution is considered as a special case of the more general Gamma distribution law, in contrast to the normalized Erlang distribution. These two forms of the Erlang distribution differ in numerical characteristics, except for the coefficient of variation. To solve the problem, the system of discrete-event modeling GPSS WORLD was used.","PeriodicalId":504905,"journal":{"name":"Informacionnye Tehnologii","volume":"84 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139613063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
N. N. Kramskoi, A. O. Roniashkevich, D. Tali, O. A. Finko
The results of an analysis of the functioning of an electronic document management system under the development of cryptanalysis methods and tools and an avalanche-like growth in the number of processed documents are presented. A conceptual model and a method for decomposing the storage subsystem into contours of operational and long-term storage of electronic documents are proposed. A mathematical model of hierarchical integrity control of electronic documents is presented, which takes into account the integrative properties of electronic documents as well as requirements for storage periods and information security. The methodology and results of assessing the advantages of the developed technical solution using the logical-probabilistic method are presented.
{"title":"A Model of an Automated Electronic Document Management System Operating under Conditions of Probable Compromise of Signature Keys, Based on a Hierarchical Decomposition of Trusted Storage Environments","authors":"N. N. Kramskoi, A. O. Roniashkevich, D. Tali, O. A. Finko","doi":"10.17587/it.30.13-22","DOIUrl":"https://doi.org/10.17587/it.30.13-22","url":null,"abstract":"The results of an analysis of the functioning of the electronic document management system in the context of the development of methods and means of cryptanalysis and an avalanche increase in the number of processed documents are presented. A conceptual model and method for decomposing the storage subsystem into contours of operational and long-term storage of electronic documents are proposed. A mathematical model of hierarchical control of the integrity of electronic documents is presented, which takes into account the integrative properties of electronic documents, as well as requirements for storage periods and information security. The methodology and results of assessing the advantages of the developed technical solution based on the logical-probabilistic method are presented.","PeriodicalId":504905,"journal":{"name":"Informacionnye Tehnologii","volume":"60 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139613395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I. Shcherban, V. S. Fedotova, N. E. Kirilenko, A. Matukhno, O. Shcherban, L. V. Lysenko
A method has been developed for localizing spatiotemporal patterns observed in sequentially recorded laser scanning microscopy images and reflecting the dynamics of the biological structures under study. By interpolating each individual image with radial basis functions, a compact mathematical model of the spatiotemporal dynamics of the brightness function over a sequence of images is obtained. The subsequent localization of the sought dynamic patterns is carried out using the mathematical apparatus of singular spectrum analysis. Experiments on optical imaging of activity patterns in the olfactory bulb of a macrosmatic animal (rat) confirmed the efficiency of the developed method in localizing the reaction to biomarkers of human oncological diseases.
{"title":"Method of Localization of Spatiotemporal Patterns on the Time Sequence of Biomedical Images","authors":"I. Shcherban, V. S. Fedotova, N. E. Kirilenko, A. Matukhno, O. Shcherban, L. V. Lysenko","doi":"10.17587/it.30.42-49","DOIUrl":"https://doi.org/10.17587/it.30.42-49","url":null,"abstract":"A method has been developed for localizing spatiotemporal patterns observed in sequentially recorded biomedical images of laser scanning microscopy and reflecting the dynamics of the biological structures under study. By means of interpolation by radial basis functions of each individual image, a compact mathematical model of the space-time dynamics of the brightness function on a sequence of images is obtained. The subsequent localization of the structures of the sought-for dynamic patterns is carried out by means of the mathematical apparatus of singular spectral analysis. The results of experiments on optical visualization of the activity patterns of the olfactory bulb of a macrosmatic (rat) confirmed the efficiency of the developed method for localizing the reaction to biomarkers of human oncological diseases.","PeriodicalId":504905,"journal":{"name":"Informacionnye Tehnologii","volume":"68 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139613379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The article presents a technique for creating and training an artificial neural network to recognize network traffic anomalies, using relatively small samples of collected data to generate training data. Various data sources for machine learning and approaches to network traffic analysis are considered. The data format and the method of generating it from the collected network traffic are described, as are the steps of the methodology in detail. Using the technique, an artificial neural network was created and trained to recognize anomalies in ICMP network traffic. The results of testing and comparing various network configurations and training conditions for this task are presented. The network trained according to the technique was tested on real network traffic. The technique can be applied without changes to detect anomalies in other network protocols and traffic, given a suitable parameterizer and data labeling.
{"title":"A Technique for Creating and Training an Artificial Neural Network to Detect Network Traffic Anomalies","authors":"S. O. Ivanov","doi":"10.17587/it.30.32-41","DOIUrl":"https://doi.org/10.17587/it.30.32-41","url":null,"abstract":"The article presents a technique for creating and training an artificial neural network to recognize network traffic anomalies using relatively small samples of collected data to generate training data. Various data sources for machine learning and approaches to network traffic analysis are considered. There are data format and the method of generating them from the collected network traffic is described, as well as the steps of the methodology in detail. Using the technique, an artificial neural network was created and trained for the task of recognizing anomalies in the network traffic of the ICMP protocol. The results of testing and comparing various artificial neural network configurations and learning conditions for a given task are presented. The artificial neural network trained according to the method was tested on real network traffic. The presented technique can be applied without requiring changes to detect anomalies of various network protocols and network traffic using a suitable parameterizer and data markup.","PeriodicalId":504905,"journal":{"name":"Informacionnye Tehnologii","volume":"4 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139524994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The problem of assessing the stability of an element of the information infrastructure of an automated control system under single impacts of information security threats is considered. It is proposed to reduce the stability assessment problem to constructing the survivability function of the element under study and determining its extreme values. Functional analysis methods were used to solve the problem. The problem is solved under the assumption that the random moments of threat exposure and of restoration of the element's function follow exponential distribution laws. Recommendations are given on forming the initial data for modeling, and the results of test modeling are presented as graphs of the survivability function. A computational experiment with the mathematical model was carried out to investigate its effectiveness. The model can be used to construct the survivability function of an element subjected to a single impact and subsequently restored. The novelty of the result lies in the concretization of general and particular mathematical models designed to assess the stability of functioning under repeated exposure to threats, which makes it possible to significantly simplify the model while retaining acceptable reliability of the result.
{"title":"A Mathematical Model for Assessing the Stability of the Functioning of an Element of the Information Infrastructure of an Automated Control System Exposed to Threats to Information Security","authors":"V. A. Voevodin","doi":"10.17587/it.30.23-31","DOIUrl":"https://doi.org/10.17587/it.30.23-31","url":null,"abstract":"The problem of assessing the stability of an element of the information infrastructure of the automated control system to single impacts of threats to information security is considered. For the solution, it is proposed to reduce the problem of stability estimation to the problem of constructing the survivability function of the element under study and determining its extreme values. Functional analysis methods were used to solve the problem. The problem is solved subject to the adoption of exponential laws of distribution of random moments of exposure time and moments of recovery time of the element's function. Recommendations are given on the formation of initial data for modeling, the results of test modeling in the form of graphs of the survivability function. A computational experiment with a mathematical model was carried out, aimed at investigating the effectiveness of the proposed model. A mathematical model can be used to construct a survivability function with a single impact and restore the functionality of an element exposed to threats. 
The novelty of the result lies in the concretization of general and particular mathematical models designed to assess the stability of functioning under conditions of repeated exposure to threats, which makes it possible to significantly simplify the model with acceptable reliability of the result obtained.","PeriodicalId":504905,"journal":{"name":"Informacionnye Tehnologii","volume":"45 19","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139611866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
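Under exponential exposure and recovery laws, a closed-form survivability (instantaneous availability) function of a two-state up/down element follows from the classical Markov model; the sketch below shows this standard result, which may differ in detail from the paper's formulation:

```python
import math

def availability(t, lam, mu):
    """Instantaneous availability of a two-state (up/down) Markov element:
    threat impacts arrive with exponential rate lam, recovery takes an
    exponential time with rate mu.  The classical closed form is
        A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu)*t),
    with A(0) = 1 and A(t) -> mu/(lam+mu) (stationary availability)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)
```

Plotting `availability(t, lam, mu)` over t reproduces the qualitative shape of the survivability graphs the abstract mentions: a monotone decay from 1 toward the stationary level mu/(lam+mu).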
Cardinality estimation (CardEst) plays an important role in creating high-quality query execution plans in a DBMS. In the last decade, a large number of methods have been developed: traditional methods (histograms, samples) and machine learning methods based on queries or data. But all of them rest on various restrictions and assumptions, and their cardinality estimates worsen as the number of joined tables grows. The article proposes two new methods based on the theory of approximate calculation of aggregates that remove most of these restrictions. Method 1 fetches blocks after executing the subqueries of the original query and does not require a preliminary analysis of the filter conditions; the join condition can be arbitrary (not necessarily attribute equality). Method 2 calculates the probabilities of reading blocks from metadata accumulated while the database is populated. The metadata takes up little memory, and the overhead of maintaining it is low. Method 2, in contrast to Method 1, takes into account the true join cardinality of neighboring blocks in the sample obtained from the metadata, which opens the prospect of more accurately estimating the cardinality of joins over a large number of tables. Method 1 (the EVACAR method) is implemented and compared with the modern machine learning methods BayesCard, DeepDB, and FLAT on the special STATS benchmark. The experimental results confirmed the effectiveness of the EVACAR method: it is more accurate than, or its maximum q-error is comparable to, the machine learning methods for 75-88 % of the evaluated queries (subplans). In the future, it is planned to implement Method 2 for query cardinality estimation.
{"title":"Estimating the Cardinality of Queries Based on a Sample from a Full Outer Join of Tables","authors":"U. A. Grigorev","doi":"10.17587/it.29.650-663","DOIUrl":"https://doi.org/10.17587/it.29.650-663","url":null,"abstract":"Cardinality estimation (CardEst) plays an important role in creating high-quality query execution plans in the DBMS. In the last decade, a large number of methods have been developed: traditional methods (histograms, samples), machine learning methods based on queries or data. But all of them are based on different restrictions and assumptions, and the cardinality estimation with their help worsens with an increase in the number of joined tables. The article proposes two new methods based on the theory of approximate calculation of aggregates and allowing to remove most of the restrictions. Method 1 fetches blocks after executing the subqueries of the original query and does not require a preliminary analysis of the filter conditions. The condition for joining tables can be arbitrary (not necessarily the equality of attributes). Method 2 allows you to calculate the probabilities of reading blocks based on the metadata accumulated in the process of populating the database. Metadata takes up little memory, and the overhead of maintaining it is low. Method 2, in contrast to method 1, takes into account the true cardinality of the connection of neighboring blocks in the sample obtained from metadata. Therefore, the prospect opens up for a more accurate assessment of the cardinality of joining a large number of tables. Method 1 (EVACAR method) is implemented and compared with modern machine learning methods BayesCard, DeepDB, FLAT on a special STATS test. The results of the experiments confirmed the effectiveness of the EVACAR method. The EVACAR method is more accurate or its maximum q-error is comparable to machine learning methods for 75-88 % of evaluated queries (subplans). 
In the future, it is planned to implement the 2nd method for assessing the cardinality of queries.","PeriodicalId":504905,"journal":{"name":"Informacionnye Tehnologii","volume":"24 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139168603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
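The idea of estimating join cardinality from samples can be illustrated with naive independent row sampling; this is far simpler than the paper's block-based EVACAR method and is only meant to show the scaling step (the tables, key distribution, and sampling fraction are invented):

```python
import random

rng = random.Random(42)

# Two hypothetical tables joined on column `k` (values 0..99).
t1 = [{"k": rng.randrange(100)} for _ in range(5000)]
t2 = [{"k": rng.randrange(100)} for _ in range(5000)]

def estimate_join_cardinality(a, b, frac=0.1):
    """Estimate |a JOIN b| from independent row samples of both tables:
    join the two samples and scale the match count by 1 / frac**2."""
    sa = rng.sample(a, int(len(a) * frac))
    sb = rng.sample(b, int(len(b) * frac))
    counts = {}
    for row in sb:
        counts[row["k"]] = counts.get(row["k"], 0) + 1
    matches = sum(counts.get(row["k"], 0) for row in sa)
    return matches / (frac * frac)

# Exact cardinality, computed here only to check the estimate against.
exact = {}
for row in t2:
    exact[row["k"]] = exact.get(row["k"], 0) + 1
true_card = sum(exact.get(row["k"], 0) for row in t1)
```

The estimator is unbiased but its variance grows with skew and with the number of joined tables, which is exactly the weakness the block-sampling methods in the article aim to address.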
Currently, the wide demand for the implementation and use of various cloud solutions is a modern trend and a driving force behind the development of network technologies. The growth of cloud application services delivered through data centers with varying network traffic needs exposes the limitations of traditional routing and load balancing methods. Combining the advantages of software-defined networking (SDN) technology and artificial intelligence (AI) methods enables efficient management and operation of computer network resources. The paper proposes an approach to neural network multipath routing in SDN based on a genetic algorithm. The architecture and model of an artificial neural network have been developed to solve the multipath routing problem in SDN; the network predicts the shortest paths based on the metrics of communication links. To optimize the hyperparameters of the neural network model, a modified genetic algorithm is proposed. A visual software system, SDNLoadBalancer, has been developed and an experimental SDN topology designed, making it possible to study in detail the processes of neural network multipath routing in SDN based on the proposed approach. The results show that the proposed neural network model can predict routes with high accuracy in real time, which makes it possible to implement various load balancing schemes to increase SDN performance.
{"title":"Neural Network Multipath Routing in Software Defined Networks Based on Genetic Algorithm","authors":"D. A. Perepelkin, M. Ivanchikova, V. T. Nguyen","doi":"10.17587/it.29.622-629","DOIUrl":"https://doi.org/10.17587/it.29.622-629","url":null,"abstract":"Currently, a wide demand for the implementation and use of various cloud solutions is a modern trend and the driving force behind the development of network technologies. The growth of cloud application services delivered through data centers with varying network traffic needs demonstrates the limitations of traditional routing and load balancing methods. The combination of the advantages of software defined networks (SDN) technology and artificial intelligence (AI) methods ensures efficient management and operation of computer network resources. The paper proposes an approach to neural network multipath routing in SDN based on a genetic algorithm. The architecture and model of an artificial neural network has been developed to solve the problem of multipath routing in the SDN, which is able to predict the shortest paths based on the metrics of communication links. To optimize the hyperparameters of the neural network model, it is proposed to use a modified genetic algorithm. A visual software system SDNLoadBalancer has been developed and an experimental SDN topology has been designed, which makes it possible to study in detail the processes of neural network multipath routing in SDN based on the proposed approach. 
The obtained results show that the proposed neural network model has the ability to predict routes with high accuracy in real time, which makes it possible to implement various load balancing schemes in order to increase performance of SDN.","PeriodicalId":504905,"journal":{"name":"Informacionnye Tehnologii","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139169582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
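A genetic search over hyperparameters can be sketched as follows; the fitness function here is an invented proxy for validation loss (over a learning-rate exponent and a hidden-layer size), not the paper's model, and the operators (truncation selection, one-point crossover, Gaussian/integer mutation) are generic choices rather than the authors' modification:

```python
import random

rng = random.Random(7)

def loss(genome):
    """Proxy 'validation loss' for a genome (lr_exponent, hidden_units);
    minimized at lr_exponent = -3, hidden_units = 64."""
    lr_exp, hidden = genome
    return (lr_exp + 3.0) ** 2 + (hidden - 64) ** 2 / 1000.0

def mutate(g):
    return (g[0] + rng.gauss(0, 0.3), max(1, g[1] + rng.randrange(-8, 9)))

def crossover(a, b):
    # one-point crossover over the two genes
    return (a[0], b[1]) if rng.random() < 0.5 else (b[0], a[1])

def genetic_search(pop_size=20, generations=40):
    pop = [(rng.uniform(-6, 0), rng.randrange(1, 256)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=loss)

best = genetic_search()
```

In the paper's setting, `loss` would be replaced by actually training the routing network with the candidate hyperparameters and measuring its validation error, which is far more expensive per evaluation.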
The problem of recognizing an underwater pipeline (UP) from stereo images taken by an autonomous underwater vehicle (AUV) is considered, both for the initial position of UP tracking and for situations where the previous UP section is hidden by interference (buried in the ground, covered by algae, etc.). The final result of identifying the UP section visible to the stereo camera is the calculation of its center line and the determination of the relative position of the AUV and the UP in the camera coordinate system. The article proposes a recognition method based on selecting visible UP boundaries (contours) in vectorized images of a stereo pair. At the vectorization stage, noise is eliminated, illumination is equalized, and the image is processed with the Canny method to obtain a binary image. UP contours are constructed using an algorithm proposed by the authors, a modification of the Hough method. The main feature of the proposed algorithm is its relatively high performance, achieved by a multifold reduction in the amount of information processed. The volume of processed data is reduced by pre-sorting the line segments in the vectorized image and by optimizing the computational scheme of the algorithm. The experiments also showed that the algorithm can detect the visible boundaries of the UP in blurry, low-contrast images. The algorithmic basis of the method is described in detail, including: search for and construction of the most reliable UP boundaries using the method of the integral contribution of line segments to line formation; generation and selection of point features belonging to the UP surface (based on the constructed contours); calculation of the 3D direction of the center line; calculation of the center line of the visible UP section; and calculation of the AUV position parameters relative to the UP required by the AUV control system. 
The center line is calculated by the least squares method using point features belonging to the UP surface. Computational experiments on virtual scenes using real seabed texture confirm the operability of the implemented approach and the possibility of applying it to the inspection of underwater infrastructure.
{"title":"Algorithm for Recognizing an Underwater Pipeline from Stereo Images","authors":"V. Bobkov, M. A. Morozov, A. Shupikova","doi":"10.17587/it.29.639-649","DOIUrl":"https://doi.org/10.17587/it.29.639-649","url":null,"abstract":"The problem of recognition of an underwater pipeline (UP) from stereo images using an autonomous underwater robot (AUV) is considered in relation to the initial position of tracking the UP, or to situations where the previous section of the UP is hidden by interference (submerged in the ground, hidden by algae, etc.). The final result of the identification of the UP section, visible by the stereo camera, is the calculation of its center line and the detection of the relative position of the AUV and UP in the camera coordinate system. The article proposes a recognition method based on the selection of visible UP boundaries (contours) on vectorized images of a stereopair. At the stage of vectorization, noise is eliminated, illumination is equalized, and the image is processed using the Canny method to obtain a binary image. The construction of UP contours is performed using the algorithm proposed by the authors, which is a modification of the Hough method. The main feature of the proposed algorithm is a relatively high performance due to a multiple reduction in the amount of information being processed. Reducing the volume of processed data is done by pre-sorting the line segments in the vectorized image, and by optimizing the computational scheme in the algorithm. The experiments also showed that the algorithm can detect the visible boundaries of the UP on blurry, non-contrasting images. 
The algorithmic basis of the method is described in detail, including: — search and construction of the most reliable UP boundaries using the method of the integral contribution of the line segments to the line formation; generation and selection of point features belonging to the surface of the UP (due to the constructed contours); calculation of the 3D direction of the center line; calculation of the center line of the visible section of UP; —calculation of the AUV position parameters relative to the UP required for the AUV control system. The centerline calculation is performed using the least squares method using point features belonging to the surface of the UP. The performed computational experiments on virtual scenes using the real texture of the seabed confirm the operability of the implemented approach and the possibility of its application for the inspection of underwater infrastructure.","PeriodicalId":504905,"journal":{"name":"Informacionnye Tehnologii","volume":"301 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139170223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
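The least-squares centerline fit can be sketched as a 3D line through the point features: the centroid plus the dominant scatter direction (here obtained by power iteration on the scatter matrix; the paper's exact formulation may differ):

```python
import math

def centerline(points):
    """Least-squares 3D line through point features: returns (centroid,
    unit direction). The direction is the dominant eigenvector of the
    3x3 scatter matrix, found by power iteration."""
    n = len(points)
    c = [sum(p[i] for p in points) / n for i in range(3)]
    # scatter matrix S_ij = sum_k (p_ki - c_i)(p_kj - c_j)
    s = [[sum((p[i] - c[i]) * (p[j] - c[j]) for p in points) for j in range(3)]
         for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(100):                 # power iteration
        w = [sum(s[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return c, v
```

Given the 3D point features triangulated from the stereo pair, the returned centroid and direction define the visible section's center line, from which the AUV's relative position parameters can be derived.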
A. Abramov, A. Gonchar, A. V. Evseev, B. M. Shabanov
The paper is devoted to the issues of organizing the interaction of the new generation National Research Computer Network (NIKS) with individual Russian regional research and education telecommunications networks in the course of the development of NIKS within the framework of the national project "Science and Universities". A brief excursion into the history of the creation and development of academic networks within the territory of St. Petersburg is given. The main attention is paid to the organizational, methodological and technical aspects of the implementation of activities for the integration of the Regional Unified Computer Network for Education and Science of St. Petersburg (ROKSON) into NIKS.
{"title":"Experience in Integrating Regional Research and Education Networks to the National Research Computer Network of Russia on the Example of the ROKSON Network","authors":"A. Abramov, A. Gonchar, A. V. Evseev, B. M. Shabanov","doi":"10.17587/it.29.615-621","DOIUrl":"https://doi.org/10.17587/it.29.615-621","url":null,"abstract":"The paper is devoted to the issues of organizing the interaction of the new generation National Research Computer Network (NIKS) with individual Russian regional research and education telecommunications networks in the course of the development of NIKS within the framework of the national project \"Science and Universities\". A brief excursion into the history of the creation and development of academic networks within the territory of St. Petersburg is given. The main attention is paid to the organizational, methodological and technical aspects of the implementation of activities for the integration of the Regional Unified Computer Network for Education and Science of St. Petersburg (ROKSON) into NIKS.","PeriodicalId":504905,"journal":{"name":"Informacionnye Tehnologii","volume":"20 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139168359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}