Construction of a Semiautomatic Contour of Areal Objects on Hyperspectral Satellite Images
Pub Date: 2024-07-04 | DOI: 10.1134/s1054661824700111
Bin Lei, Wei Wan, Artiom Nedzved, Alexei Belotserkovsky
Abstract
In this article, we formalize the problem of semiautomatic construction of the contour of areal objects from satellite hyperspectral images and present a solution algorithm using PCA and Dijkstra’s algorithm. The contour is considered as the boundary of an object, which can be used for its segmentation and classification. The semiautomatic contouring procedure accepts reference points specified by the operator. The formalization of the algorithm is completed.
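The abstract describes the method only at a high level; the following minimal sketch, with illustrative names and an assumed (H, W, B) hyperspectral cube, shows one way PCA and Dijkstra’s algorithm can be combined: the bands are projected onto the first principal component, a cost image is derived from its gradient, and the cheapest 8-connected path between two operator-specified reference points is returned as a contour segment.

```python
import heapq
import numpy as np
from sklearn.decomposition import PCA

def contour_segment(cube, start, end):
    """Trace a low-cost path between two operator reference points.

    cube  : (H, W, B) hyperspectral image with B spectral bands
    start : (row, col) reference point set by the operator
    end   : (row, col) reference point set by the operator
    """
    h, w, b = cube.shape
    # 1. PCA: project every pixel spectrum onto the first principal component.
    pc1 = PCA(n_components=1).fit_transform(cube.reshape(-1, b)).reshape(h, w)
    # 2. Cost image: cheap to walk along strong gradients (object boundary).
    gy, gx = np.gradient(pc1)
    cost = 1.0 / (np.hypot(gx, gy) + 1e-6)
    # 3. Dijkstra over the 8-connected pixel grid.
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                    nd = d + cost[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
    # 4. Backtrack the contour segment from end to start.
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```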
{"title":"Construction of a Semiautomatic Contour of Areal Objects on Hyperspectral Satellite Images","authors":"Bin Lei, Wei Wan, Artiom Nedzved, Alexei Belotserkovsky","doi":"10.1134/s1054661824700111","DOIUrl":"https://doi.org/10.1134/s1054661824700111","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>In this article, we formalize the problem of semiautomatic construction of the contour of area objects from satellite hyperspectral images and present a solution algorithm using PCA and Dijkstra’s algorithm. The contour is considered as the boundary of an object, which can be used for its segmentation and classification. The semiautomatic contour accepts reference points specified by the operator. The formalization of the algorithm is completed.</p>","PeriodicalId":35400,"journal":{"name":"PATTERN RECOGNITION AND IMAGE ANALYSIS","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Some Scientific Results of the 16th International Conference PRIP-2023
Pub Date: 2024-07-04 | DOI: 10.1134/s1054661824700019
S. V. Ablameyko, I. B. Gurevich, A. M. Nedzved, V. V. Yashina
Abstract
The main scientific results of the 16th International Conference on Pattern Recognition and Information Processing (PRIP-2023), Minsk, Republic of Belarus, October 2023, are reviewed and analyzed. The history of this series of conferences is outlined, and its significant role in the development of the theory and practice of image analysis, pattern recognition, and artificial intelligence is indicated. A list of articles in the special issue is provided, prepared from reports selected by the PRIP-2023 Program Committee.
{"title":"Some Scientific Results of the 16th International Conference PRIP-2023","authors":"S. V. Ablameyko, I. B. Gurevich, A. M. Nedzved, V. V. Yashina","doi":"10.1134/s1054661824700019","DOIUrl":"https://doi.org/10.1134/s1054661824700019","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>The main scientific results of the 16th International Conference on Pattern Recognition and Information Processing (PRIP-2023), Minsk, Republic of Belarus, October 2023, are reviewed and analyzed. The history of this series of conferences is outlined, and its significant role in the development of the theory and practice of image analysis, pattern recognition, and artificial intelligence is indicated. A list of articles in the special issue is provided, prepared from reports selected by the PRIP-2023 Program Committee.</p>","PeriodicalId":35400,"journal":{"name":"PATTERN RECOGNITION AND IMAGE ANALYSIS","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Automation of Eye Disease Diagnoses Using Descriptive Image Algebras and Boolean Algebra Methods
Pub Date: 2024-07-04 | DOI: 10.1134/s1054661824700093
I. B. Gurevich, V. V. Yashina
Abstract
The article presents an algebraic model for solving the problem of automation of ophthalmological diagnostics written in the language of descriptive image algebras. Descriptive image algebras are an initial mathematical language for formalizing and standardizing representations and procedures for processing image models and conversions over them when extracting information from images. To construct an algebraic model for solving the problem of automation of ophthalmological diagnostics, descriptive algebras of images with one ring are mainly used. This class of algebras belongs to the class of universal linear algebras with a sigma-associative ring with identity. A series of conversions and steps of the algebraic model are described using descriptive Boolean algebras over images. Descriptive image algebras are the main section of the mathematical apparatus of descriptive image analysis, which is a logically organized set of descriptive methods and models designed for image analysis and evaluation. The article defines specialized versions of descriptive image algebras with one ring and descriptive Boolean algebras over images, over models and representations of images, and over conversions of image models and images themselves, necessary for constructing an algebraic model. The image models (representations, formalized descriptions) used in the article are described. An example of a descriptive algorithmic scheme for solving an applied ophthalmological problem using an algebraic model is constructed.
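The algebraic constructions themselves are presented abstractly in the article; as a purely illustrative aside, the elementwise Boolean operations on binary masks below are the kind of image-to-image conversions that a descriptive Boolean algebra over images formalizes. The operand names are hypothetical and not taken from the article.

```python
import numpy as np

def boolean_ops(vessels: np.ndarray, exudates: np.ndarray):
    """Elementwise Boolean operations on two binary masks (illustrative only)."""
    union        = vessels | exudates      # join of two image predicates
    intersection = vessels & exudates      # meet
    complement   = ~vessels                # complement of a mask
    difference   = vessels & ~exudates     # a derived operation
    return union, intersection, complement, difference

if __name__ == "__main__":
    a = np.zeros((64, 64), dtype=bool); a[10:30, 10:30] = True
    b = np.zeros((64, 64), dtype=bool); b[20:40, 20:40] = True
    u, i, c, d = boolean_ops(a, b)
    print(u.sum(), i.sum(), c.sum(), d.sum())
```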
{"title":"Automation of Eye Disease Diagnoses Using Descriptive Image Algebras and Boolean Algebra Methods","authors":"I. B. Gurevich, V. V. Yashina","doi":"10.1134/s1054661824700093","DOIUrl":"https://doi.org/10.1134/s1054661824700093","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>The article presents an algebraic model for solving the problem of automation of ophthalmological diagnostics written in the language of descriptive image algebras. Descriptive image algebras are an initial mathematical language for formalizing and standardizing representations and procedures for processing image models and conversions over them when extracting information from images. To construct an algebraic model for solving the problem of automation of ophthalmological diagnostics, descriptive algebras of images with one ring are mainly used. This class of algebras belongs to the class of universal linear algebras with a sigma-associative ring with identity. A series of conversions and steps of the algebraic model are described using descriptive Boolean algebras over images. Descriptive image algebras are the main section of the mathematical apparatus of descriptive image analysis, which is a logically organized set of descriptive methods and models designed for image analysis and evaluation. The article defines specialized versions of descriptive image algebras with one ring and descriptive Boolean algebras over images, over models and representations of images, and over conversions of image models and images themselves, necessary for constructing an algebraic model. The image models (representations, formalized descriptions) used in writing the article are described. An example of a descriptive algorithmic scheme for solving an applied ophthalmological problem using an algebraic model is constructed.</p>","PeriodicalId":35400,"journal":{"name":"PATTERN RECOGNITION AND IMAGE ANALYSIS","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Scientific Gateway for Evaluating Land-Surface Temperatures Using Landsat 8 and Meteorological Data over Armenia and Belarus
Pub Date: 2024-07-04 | DOI: 10.1134/s1054661824700020
R. Abrahamyan, A. Belotserkovsky, P. Lukashevich, A. Gevorgyan, H. Grigoryan, H. Astsatryan
Abstract
The article introduces a scientific gateway to assess land surface temperatures using Landsat 8 and visible infrared imaging radiometer suite data. The gateway offers a selection of four temperature retrieval algorithms and two interpolation methods to create time series. The evaluation of the gateway’s performance in Armenia from May to October 2022 is illustrated. The research identifies the Price, Jiménez-Muñoz, McMillin, and I05 Chanel algorithms as the most accurate for nighttime temperature estimation. Additionally, these products exhibit a reasonable level of accuracy, with an average root mean squared error ranging from 2.42 to 2.45°C and a coefficient of determination spanning from 0.82 to 0.95. The outcomes of this study bear significant relevance for diverse applications such as urban heat island analysis, environmental monitoring, and agricultural assessments.
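The two accuracy figures quoted above can be reproduced from retrieved and reference temperatures with a few lines of NumPy. The sketch below is only illustrative, and the sample values are hypothetical.

```python
import numpy as np

def validate_lst(retrieved: np.ndarray, station: np.ndarray):
    """Compare retrieved land-surface temperatures (deg C) with reference
    measurements using the two metrics cited in the abstract: RMSE and R^2."""
    err = retrieved - station
    rmse = float(np.sqrt(np.mean(err ** 2)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((station - station.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return rmse, r2

# Hypothetical values, only to show the call signature.
rmse, r2 = validate_lst(np.array([21.3, 18.9, 25.1]), np.array([20.8, 19.5, 24.4]))
print(f"RMSE = {rmse:.2f} deg C, R2 = {r2:.2f}")
```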
{"title":"Scientific Gateway for Evaluating Land-Surface Temperatures Using Landsat 8 and Meteorological Data over Armenia and Belarus","authors":"R. Abrahamyan, A. Belotserkovsky, P. Lukashevich, A. Gevorgyan, H. Grigoryan, H. Astsatryan","doi":"10.1134/s1054661824700020","DOIUrl":"https://doi.org/10.1134/s1054661824700020","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>The article introduces a scientific gateway to assess land surface temperatures using Landsat 8 and visible infrared imaging radiometer suite data. The gateway offers a selection of four temperature retrieval algorithms and two interpolation methods to create time series. The evaluation of the gateway’s performance in Armenia from May to October 2022 is illustrated. The research identifies the Price, Jiménez-Muñoz, McMillin, and I05 Chanel algorithms as the most accurate nighttime temperature estimation. Additionally, these products exhibit a reasonable level of accuracy, with an average root mean squared error ranging from 2.42 to 2.45°C and a coefficient of determination spanning from 0.82 to 0.95. The outcomes of this study bear significant relevance for diverse applications such as urban heat island analysis, environmental monitoring, and agricultural assessments.</p>","PeriodicalId":35400,"journal":{"name":"PATTERN RECOGNITION AND IMAGE ANALYSIS","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Crowd Movement Type Estimation in Video by Integral Optical Flow and Convolution Neural Network
Pub Date: 2024-07-04 | DOI: 10.1134/s1054661824700068
Huafeng Chen, Angelina Pashkevich, Shiping Ye, Rykhard Bohush, Sergey Ablameyko
Abstract
The paper proposes a new approach for crowd movement type estimation in video by combining a convolutional neural network and integral optical flow. First, the main notions of crowd detection and tracking are given. Second, crowd movement features and parameters are defined: three rules are proposed to identify direct crowd motion, signs are presented for identifying chaotic crowd movement, and region movement indicators are introduced to analyze the movement of a group of people or a crowd. Third, an algorithm for crowd movement type estimation using a convolutional neural network and integral optical flow is proposed. We calculate crowd movement trajectories and show how they can be used to analyze behavior and divide crowds into groups of people. Experimental results show that, with the help of a convolutional neural network and integral optical flow, crowd movement parameters can be calculated more accurately and quickly. The algorithm demonstrates stronger robustness to noise and the ability to obtain more accurate boundaries of moving objects.
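The abstract does not define integral optical flow precisely; one plausible reading, sketched below under that assumption, is the per-pixel accumulation of dense optical flow over a frame sequence. OpenCV’s Farneback estimator is used here only as a stand-in for the paper’s flow computation.

```python
import cv2
import numpy as np

def integral_optical_flow(frames):
    """Accumulate dense optical flow over a sequence of grayscale frames.

    frames: list of equally sized uint8 grayscale images. The result is the
    per-pixel sum of Farneback flow vectors; the paper's exact definition of
    integral optical flow may differ.
    """
    acc = np.zeros((*frames[0].shape, 2), dtype=np.float32)
    for prev, curr in zip(frames[:-1], frames[1:]):
        # Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        acc += flow
    magnitude = np.linalg.norm(acc, axis=2)            # strength of accumulated motion
    direction = np.arctan2(acc[..., 1], acc[..., 0])   # dominant motion direction per pixel
    return acc, magnitude, direction
```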
{"title":"Crowd Movement Type Estimation in Video by Integral Optical Flow and Convolution Neural Network","authors":"Huafeng Chen, Angelina Pashkevich, Shiping Ye, Rykhard Bohush, Sergey Ablameyko","doi":"10.1134/s1054661824700068","DOIUrl":"https://doi.org/10.1134/s1054661824700068","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>The paper proposes a new approach for crowd movement type estimation in video by combining convolutional neural network and integral optical flow. At first, main notions of crowd detection and tracking are given. Secondly, crowd movement features and parameters are defined. Three rules are proposed to identify direct crowd motion. Signs are presented for identifying chaotic crowd movement. Region movement indicators are introduced to analyze the movement of a group of people or a crowd. Thirdly, an algorithm of crowd movement types estimation using convolutional neural network and integral optical flow is proposed. We calculate crowd movement trajectories and show how they can be used to analyze behavior and divide crowds into groups of people. Experimental results show that with the help of convolutional neural network and integral optical flow crowd movement parameters can be calculated more accurately and quickly. The algorithm demonstrates stronger robustness to noise and the ability to get more accurate boundaries of moving objects.</p>","PeriodicalId":35400,"journal":{"name":"PATTERN RECOGNITION AND IMAGE ANALYSIS","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Random Search in Neural Networks Training
Pub Date: 2024-07-04 | DOI: 10.1134/s105466182470010x
V. V. Krasnoproshin, V. V. Matskevich
Abstract
The paper deals with a state-of-the-art applied problem related to neural network training. It is shown that, given the expanding range of practical problems, gradient methods do not always satisfy the conditions of the subject area, which motivates the development of alternative training methods. An original training algorithm implementing the annealing method is proposed, for which convergence to the optimal solution is proven. A modified version of the algorithm has been developed that is invariant to the size of the training sample. Experimental studies (on problems of image classification and color image compression) confirm the effectiveness of the proposed approach.
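The abstract gives no pseudocode; the following minimal sketch of annealing-style random search over a flat weight vector illustrates the general idea. The hyperparameter names and values are assumptions, not the paper’s settings.

```python
import numpy as np

def anneal_train(loss_fn, w0, t0=1.0, t_min=1e-3, cooling=0.995, step=0.05, rng=None):
    """Annealing-style random search over a flat weight vector.

    loss_fn maps a weight vector to a scalar loss (e.g. evaluated on a
    mini-batch); w0 is the initial weight vector.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    w, loss = w0.copy(), loss_fn(w0)
    best_w, best_loss = w.copy(), loss
    t = t0
    while t > t_min:
        cand = w + rng.normal(scale=step, size=w.shape)   # random perturbation
        cand_loss = loss_fn(cand)
        # Always accept improvements; accept worse candidates with Boltzmann probability.
        if cand_loss < loss or rng.random() < np.exp((loss - cand_loss) / t):
            w, loss = cand, cand_loss
            if loss < best_loss:
                best_w, best_loss = w.copy(), loss
        t *= cooling                                      # cooling schedule
    return best_w, best_loss

# Hypothetical usage on a quadratic surrogate loss:
# best_w, best_loss = anneal_train(lambda w: float(np.sum(w ** 2)), np.ones(10))
```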
{"title":"Random Search in Neural Networks Training","authors":"V. V. Krasnoproshin, V. V. Matskevich","doi":"10.1134/s105466182470010x","DOIUrl":"https://doi.org/10.1134/s105466182470010x","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>The paper deals with a state-of-art applied problem related to the neural networks training. It is shown that, given the expansion of the range of practical problems, gradient methods do not always satisfy the conditions of the subject area, which contributes to the development of alternative training methods. An original training algorithm is proposed that implements the annealing method, for which convergence to the optimal solution is proven. A modified version of the algorithm has been developed that is invariant to the size of the training sample. Experimental studies (using the example of solving problems of image classification and color image compression) confirm the effectiveness of the proposed approach.</p>","PeriodicalId":35400,"journal":{"name":"PATTERN RECOGNITION AND IMAGE ANALYSIS","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A Writer-Dependent Approach to Offline Signature Verification Based on One-Class Support Vector Machine
Pub Date: 2024-07-04 | DOI: 10.1134/s1054661824700135
V. V. Starovoitov, U. Yu. Akhundjanov
Abstract
A new solution to the problem of offline signature verification is presented. Digital images of signatures are processed and converted into a binary representation of a certain size. Then their contours are traced, and from them two original features are calculated to describe the local structural features of the signature: vectors of normalized frequency distributions of local binary pattern codes and values of the local curvature of the signature contours. A new feature space is formed in which a pattern describes the proximity of a pair of signatures, and its coordinates are the rank correlation coefficients between the feature vectors of these signatures. In practice, the expert has M (from 5 to 15) genuine signatures of a person and no forged signatures at all. On these M available genuine signatures of a single person, we train a one-class support vector machine model and obtain a writer-dependent classifier. A signature under verification is considered forged if the classifier model considers it an outlier. The accuracy of our approach in verifying the genuineness of all 2640 signatures from the CEDAR database was 99.77%. All forged signatures in this database were correctly recognized.
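A rough sketch of this pipeline, with several simplifications, might look as follows: a second LBP descriptor stands in for the contour-curvature feature, and the one-class SVM hyperparameters are assumptions rather than the paper’s values.

```python
import numpy as np
from scipy.stats import spearmanr
from skimage.feature import local_binary_pattern
from sklearn.svm import OneClassSVM

def lbp_hist(binary_sig, P=8, R=1):
    """Normalized histogram of uniform LBP codes of a binarized signature image."""
    codes = local_binary_pattern(binary_sig.astype(np.uint8), P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def pair_features(sig_a, sig_b):
    """Proximity of two signatures as rank correlations of two descriptors.
    A second LBP radius replaces the paper's curvature descriptor here."""
    return np.array([
        spearmanr(lbp_hist(sig_a, 8, 1), lbp_hist(sig_b, 8, 1))[0],
        spearmanr(lbp_hist(sig_a, 16, 2), lbp_hist(sig_b, 16, 2))[0],
    ])

def train_verifier(genuine_sigs):
    """Fit a writer-dependent one-class SVM on all genuine-genuine pairs."""
    X = [pair_features(a, b)
         for i, a in enumerate(genuine_sigs)
         for b in genuine_sigs[i + 1:]]
    return OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(np.array(X))

def is_genuine(model, genuine_sigs, query_sig):
    """Accept the query only if none of its pairings with the references is an outlier."""
    X = np.array([pair_features(ref, query_sig) for ref in genuine_sigs])
    return bool(np.all(model.predict(X) == 1))
```

Whether a query is rejected on any outlier pairing or by majority vote is a design choice the abstract does not specify; the sketch uses the stricter rule.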
{"title":"A Writer-Dependent Approach to Offline Signature Verification Based on One-Class Support Vector Machine","authors":"V. V. Starovoitov, U. Yu. Akhundjanov","doi":"10.1134/s1054661824700135","DOIUrl":"https://doi.org/10.1134/s1054661824700135","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>A new solution to the problem of offline signature verification is presented. Digital images of signatures are processed and converted into a binary representation of a certain size. Then their contours are traced, and from them, two original features are calculated for describing the local structural features of the signature in the form of vectors of normalized frequency distributions of local binary pattern codes and values of local curvature of the signature contours. A new feature space is formed in which the pattern describes the proximity of pairs of signatures, and its coordinates are the rank correlation coefficients between the feature vectors of these signatures. In real practice, the expert has <i>M</i> (from 5 to 15) genuine signatures of a person; there are no forged signatures at all. On these <i>M</i> available genuine signatures of a single person, we train a one-class support vector machine model and obtain a single-writer-dependent classifier. A verifiable signature is considered forged if the classifier model considers it to be an outlier. The accuracy of our approach in verifying the genuineness of all 2640 signatures from the CEDAR database was 99.77%. All forged signatures in this database were correctly recognized.</p>","PeriodicalId":35400,"journal":{"name":"PATTERN RECOGNITION AND IMAGE ANALYSIS","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Detection System of Landscape’s Unnatural Changes by Satellite Images Based on Local Areas
Pub Date: 2024-07-04 | DOI: 10.1134/s1054661824700159
Xi Zhou, Qing Bu, Vadim Vladimirovich Matskevich, Alexander Mixailovich Nedzved
Abstract
The paper deals with a state-of-the-art applied problem related to the detection of landscape’s unnatural changes based on satellite images. An approach to constructing a detection system based on neural network processing of local terrain areas is proposed. As part of the approach, a neural network architecture and mechanisms for tuning to a specific area have been developed. It is shown that the use of neural networks and images corresponding to local areas (as initial data) provides easy expansion of the system to various types of terrain. The paper also presents a data filtering algorithm to adjust the balance of recall and overall precision of the system. Experimental studies have confirmed the effectiveness of the proposed approach.
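The abstract does not describe the tiling scheme; a minimal sketch of processing local terrain areas with a sliding window is given below. Here `classifier` stands for the proposed neural network, and the tile size, stride, and threshold are illustrative assumptions (the threshold plays a role similar to the recall/precision balance mentioned above).

```python
import numpy as np

def classify_local_areas(image, classifier, tile=64, stride=64, threshold=0.5):
    """Slide a window over a satellite image and flag suspicious local areas.

    image:      (H, W, C) array, e.g. an RGB or multispectral crop
    classifier: any callable mapping a (tile, tile, C) patch to a probability
                in [0, 1] of an unnatural landscape change
    Returns a list of (row, col, probability) for flagged tiles.
    """
    h, w = image.shape[:2]
    flagged = []
    for r in range(0, h - tile + 1, stride):
        for c in range(0, w - tile + 1, stride):
            p = float(classifier(image[r:r + tile, c:c + tile]))
            if p >= threshold:          # raising the threshold trades recall for precision
                flagged.append((r, c, p))
    return flagged
```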
{"title":"Detection System of Landscape’s Unnatural Changes by Satellite Images Based on Local Areas","authors":"Xi Zhou, Qing Bu, Vadim Vladimirovich Matskevich, Alexander Mixailovich Nedzved","doi":"10.1134/s1054661824700159","DOIUrl":"https://doi.org/10.1134/s1054661824700159","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>The paper deals with a state-of-the-art applied problem related to the detection of landscape’s unnatural changes based on satellite images. An approach to constructing a detection system based on neural network processing of local terrain areas is proposed. As part of the approach, a neural network architecture and mechanisms for tuning to a specific area have been developed. It is shown that the use of neural networks and images corresponding to local areas (as initial data) provides easy expansion of the system to various types of terrain. The paper also presents a data filtering algorithm to adjust the balance of recall and overall precision of the system. Experimental studies have confirmed the effectiveness of the proposed approach.</p>","PeriodicalId":35400,"journal":{"name":"PATTERN RECOGNITION AND IMAGE ANALYSIS","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Monitoring of Egg Growing in Video by the Improved DeepLabv3+ Network Model
Pub Date: 2024-07-04 | DOI: 10.1134/s1054661824700081
Fengyang Gu, Hui Zhu, Haiyang Wang, Yanbo Zhang, Fang Zuo, S. Ablameyko
Abstract
The paper proposes a noninvasive image-based method for monitoring egg development using illumination and transfer learning. As an egg develops, the size of its air cell increases. Segmentation is performed to extract the air cells; the segmentation parameters are adjusted and trained on an air cell dataset by transfer learning to separate air cells with high light transmittance from the background. An improved DeepLabV3+ network model for egg image monitoring is proposed. The network embeds coordinate attention in the lightweight MobileNetV2 backbone. The decoder feature fusion method is improved to a semantic embedding branch structure, and the newly introduced middle-level features are merged with the high-level and low-level features. The results show that the mean intersection over union of the model reaches 89.06% and the mean pixel accuracy reaches 94.66%. The method can effectively segment the air cell part of an egg. Its feasibility was verified by measuring the air cells during egg development from the 7th to the 19th day.
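The two reported metrics can be computed from predicted and ground-truth label maps via a confusion matrix. The sketch below assumes a simple background/air-cell labeling and is not taken from the paper.

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes=2):
    """Mean intersection over union and mean pixel accuracy from label maps.

    pred, gt: integer label maps of the same shape (here assumed to be
    0 = background, 1 = air cell).
    """
    pred = pred.astype(np.int64).ravel()
    gt = gt.astype(np.int64).ravel()
    # Confusion matrix: rows = ground truth, columns = prediction.
    cm = np.bincount(num_classes * gt + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm).astype(float)
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp + 1e-9)   # per-class IoU
    pa = tp / (cm.sum(axis=1) + 1e-9)                          # per-class pixel accuracy
    return float(iou.mean()), float(pa.mean())
```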
{"title":"Monitoring of Egg Growing in Video by the Improved DeepLabv3+ Network Model","authors":"Fengyang Gu, Hui Zhu, Haiyang Wang, Yanbo Zhang, Fang Zuo, S. Ablameyko","doi":"10.1134/s1054661824700081","DOIUrl":"https://doi.org/10.1134/s1054661824700081","url":null,"abstract":"<p>The paper proposes the noninvasive image egg growing monitoring method based on an illumination and transfer learning. During the egg growing, the size of egg air cell is increased. The segmentation is performed to extract cells and segmentation parameters are adjusted and trained on an air cell datasets by transfer learning to separate air cells with high light transmittance from the background. The improved DeepLabV3+ network model for image egg monitoring is proposed. The network embeds coordinate attention in the lightweight network MobilenetV2. The decoder feature fusion method is improved to a semantic embedding branch structure. The middle-level features that have been newly introduced are merged with the high-level features and low-level features. The results show that the mean intersection over union of the model reaches 89.06% and that the mean pixel accuracy rate reaches 94.66%. The method can effectively segment the air cell part of the eggs. The feasibility of the method was verified by measuring the air cells of egg growing process from the 7th to the 19th day.</p>","PeriodicalId":35400,"journal":{"name":"PATTERN RECOGNITION AND IMAGE ANALYSIS","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Identification of Mutation Combinations in Genome-Wide Association Studies: Application for Mycobacterium tuberculosis
Pub Date: 2024-07-04 | DOI: 10.1134/s1054661824700044
Yu-Xiang Chen, A. M. Andrianov, A. V. Tuzikov
Abstract
In genome-wide association studies, combinations of single nucleotide polymorphisms are considered to be more effective than individual mutations in linking genes to traits. Clearly, finding the most relevant combinations among tens of thousands of these mutations associated with a trait is a complicated combinatorial problem. To achieve higher prediction performance and to improve computational efficiency and the interpretation of results, we propose three algorithms for searching for combinations of individual mutations and apply them to 3178 samples of Mycobacterium tuberculosis strains to predict their resistance to 20 drugs. The single nucleotide polymorphisms associated with drug resistance were identified in the Mycobacterium tuberculosis genome using the single-marker test, and combinations of individual mutations were searched for using the multimarker test. The data were compared with those predicted by the widely recognized Mykrobe and TB-profiler software. Comparative analysis of the results obtained showed that, except for ofloxacin, the combinations of individual mutations found by our algorithms for the second-line drugs have some advantages in prediction accuracy.
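The abstract does not specify the statistical test used. As one plausible form of a single-marker test, the sketch below applies a chi-square test to a 2x2 contingency table of mutation presence versus drug resistance; the significance threshold is an illustrative assumption, not the paper’s value.

```python
import numpy as np
from scipy.stats import chi2_contingency

def single_marker_test(snp_presence, resistant):
    """Chi-square association test for one SNP against one drug.

    snp_presence: boolean array, mutation present in each isolate
    resistant:    boolean array, isolate resistant to the drug
    Returns the p-value; markers below a chosen threshold (e.g. 1e-6) could be
    kept as candidates for the combination (multimarker) search.
    """
    table = np.array([
        [np.sum(snp_presence & resistant),  np.sum(snp_presence & ~resistant)],
        [np.sum(~snp_presence & resistant), np.sum(~snp_presence & ~resistant)],
    ])
    _, p_value, _, _ = chi2_contingency(table)
    return p_value
```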
{"title":"Identification of Mutation Combinations in Genome-Wide Association Studies: Application for Mycobacterium tuberculosis","authors":"Yu-Xiang Chen, A. M. Andrianov, A. V. Tuzikov","doi":"10.1134/s1054661824700044","DOIUrl":"https://doi.org/10.1134/s1054661824700044","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>In genome-wide association studies, combinations of single nucleotide polymorphisms are considered to be more effective than individual mutations in linking genes to traits. Clearly, finding the most relevant combinations from tens of thousands of these mutations associated with a trait is a complicated combinatorial problem. To achieve the higher prediction performance, improve computational efficiency and results interpretation, we proposed three algorithms for searching combinations of individual mutations and applied these algorithms to 3178 samples of <i>Mycobacterium tuberculosis</i> strains for predicting their drug resistance to 20 drugs. The single nucleotide polymorphisms associated with drug resistance were identified in the <i>Mycobacterium tuberculosis</i> genome using the single-marker test, and the combinations of individual mutations were searched using the multimarker test. The data were compared with those predicted by the widely recognized Mykrobe and TB-profiler software. Comparative analysis of the results obtained showed that, excepting for ofloxacin, the combinations of individual mutations found by our algorithms for the second-line drugs have some advantages in prediction accuracy.</p>","PeriodicalId":35400,"journal":{"name":"PATTERN RECOGNITION AND IMAGE ANALYSIS","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}