Diabetic Retinopathy Fundus Image Classification Using Ensemble Methods
Pub Date: 2024-07-04. DOI: 10.1134/s1054661824700123
Marina M. Lukashevich
Abstract
Diabetic retinopathy damages the retina and impairs vision in patients with diabetes worldwide. It begins asymptomatically and can lead to complete loss of vision, so early diagnosis is crucial to prevent dangerous consequences such as blindness. Screening for this disease can be performed fairly quickly by using machine learning algorithms to analyze retinal images. This paper presents the results of implementing and comparing ensemble machine learning algorithms and describes an approach to hyperparameter selection for solving the screening problem (binary classification) and for classifying the stage of diabetic retinopathy (from 0 to 4). Particular attention is paid to grid search and random search over hyperparameters. The study uses a hyperparameter selection mechanism for ensemble algorithms based on a combination of the grid search and random search approaches. Hyperparameter selection, together with the selection of informative features, made it possible to increase the accuracy of retinal image classification. The experiments yielded an accuracy of 0.7531 on the test dataset for the best model (gradient boosting, GB) in stage classification. For binary classification (presence or absence of diabetic retinopathy), an accuracy of 0.9400 (gradient boosting, GB) was achieved.
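The paper's exact search procedure is not spelled out in the abstract; the following is a minimal scikit-learn sketch of the general idea it describes (a coarse random search followed by a grid search around the best candidate) for a gradient-boosting classifier. The synthetic feature matrix, the parameter ranges, and the refinement factors are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: RandomizedSearchCV to narrow the hyperparameter space, then
# GridSearchCV in a small neighbourhood of the best candidate, for gradient boosting.
# X, y stand in for pre-extracted retinal-image features and DR grades (0-4).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, train_test_split

X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Stage 1: random search over a wide hyperparameter space.
coarse = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions={
        "n_estimators": [100, 200, 400],
        "learning_rate": np.logspace(-3, 0, 20),
        "max_depth": [2, 3, 4, 5],
    },
    n_iter=15, cv=3, random_state=0,
).fit(X_train, y_train)

# Stage 2: grid search around the best random candidate.
best = coarse.best_params_
fine = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={
        "n_estimators": [best["n_estimators"]],
        "learning_rate": [best["learning_rate"] * f for f in (0.5, 1.0, 2.0)],
        "max_depth": [best["max_depth"] - 1, best["max_depth"], best["max_depth"] + 1],
    },
    cv=3,
).fit(X_train, y_train)

print("test accuracy:", fine.score(X_test, y_test))
```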
An Approach to Pruning the Structure of Convolutional Neural Networks without Loss of Generalization Ability
Pub Date: 2024-07-04. DOI: 10.1134/s1054661824700056
Chaoxiang Chen, Aliaksandr Kroshchanka, Vladimir Golovko, Olha Golovko
Abstract
This paper proposes an approach to pruning the parameters of convolutional neural networks using unsupervised pretraining. The authors demonstrate that the proposed approach makes it possible to reduce the number of configurable parameters of a convolutional neural network without loss of generalization ability. A comparison with existing pruning techniques is made. The capabilities of the proposed algorithm are demonstrated on the classical CIFAR-10 and CIFAR-100 computer vision benchmarks.
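The abstract does not detail the pretraining-based pruning criterion, so the sketch below only illustrates the kind of parameter reduction involved, using generic L1 magnitude pruning from PyTorch's torch.nn.utils.prune as a stand-in; the toy network, the 50% sparsity level, and the 10-class head are assumptions.

```python
# Hedged sketch: generic magnitude pruning of conv layers (not the paper's method).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),  # 10 classes, as in CIFAR-10
)

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)  # zero 50% of weights
        prune.remove(module, "weight")                            # make pruning permanent

zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"zeroed parameters: {zeros}/{total}")
```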
Automatic Analysis of Walking Steps
Pub Date: 2024-07-04. DOI: 10.1134/s1054661824700147
Shiping Ye, Olga Nedzvedz, Chaoxiang Chen, Victor Anosov, Alexander Nedzved
Abstract
Tracking the movement and changes in the position of the human skeleton is a key element of algorithms for describing human pose. Detecting changes in the skeleton's position provides important information for orthopedic problems. This article proposes an algorithm for automatic estimation of walking motion based on the reconstruction of the human skeleton and the determination of the harmonic component of walking.
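The article's exact harmonic analysis is not reproduced here; a minimal sketch of the general idea, assuming a tracked joint trajectory (e.g., the vertical coordinate of an ankle keypoint) and an assumed frame rate, is to take the Fourier spectrum of the trajectory and read off the dominant gait frequency. The trajectory below is synthetic.

```python
# Hedged sketch: dominant (harmonic) component of walking from one joint trajectory.
import numpy as np

fps = 30.0                                   # assumed video frame rate
t = np.arange(0, 10, 1 / fps)                # 10 s of tracking
trajectory = 0.05 * np.sin(2 * np.pi * 1.8 * t) + 0.01 * np.random.randn(t.size)

signal = trajectory - trajectory.mean()      # remove the constant offset
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)

step_freq = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"estimated step frequency: {step_freq:.2f} Hz")
```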
No-Reference Image Quality Assessment Based on Machine Learning and Outlier Entropy Samples
Pub Date: 2024-07-04. DOI: 10.1134/s105466182470007x
Ana Gavrovska, Andreja Samčović, Dragi Dujković
Abstract
A growing body of research focuses on approaches to assessing image quality as a result of advances in digital imaging. There is thus an increasing demand for efficient no-reference image quality assessment methods, since many real-world applications lack distortion-free (pristine) versions of images. This paper presents a new no-reference image quality method, an outlier entropy perception evaluator, for the objective evaluation of real-world distorted images based on natural scene statistics and mean-subtracted contrast-normalized (MSCN) coefficients. The distribution of these coefficients is useful for no-reference image quality assessment, and their characteristics are investigated here. Moreover, entropies such as the Shannon and approximate entropies are found suitable for quality estimation. Recent studies of perception-based approaches show differences in their correlation with subjective assessments. Similar variations appear in the entropy domain, where some samples are outliers compared to other distorted images with different distortion levels. To address these variations, this work presents an outlier entropy perception evaluator model based on machine learning that describes the diversity of distortions affecting entropy and subjective scoring. The approach employs patch extraction, from which the distortion level is estimated. The evaluator model proves efficient, showing the advantages of using Shannon and approximate entropies and outlier detection over available perception-based image quality evaluators. The results obtained with the proposed model show significant improvements in correlation with human perceptual quality ratings.
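A minimal sketch of the kind of features the evaluator builds on: MSCN coefficients (local mean subtraction and contrast normalization) and their Shannon entropy. The Gaussian window size, sigma = 7/6, the stabilizing constant C = 1, and the 64-bin histogram follow common BRISQUE-style practice and are assumptions, not the paper's exact settings; the input image is a random placeholder.

```python
# Hedged sketch: MSCN coefficients and Shannon entropy for a grayscale image.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7 / 6, c=1.0):
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                     # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu  # local variance
    return (image - mu) / (np.sqrt(np.maximum(var, 0)) + c)

def shannon_entropy(values, bins=64):
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

image = np.random.randint(0, 256, size=(128, 128))         # placeholder image
coeffs = mscn(image)
print("MSCN entropy:", shannon_entropy(coeffs.ravel()))
```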
Image Inpainting by Machine Learning Algorithms
Pub Date: 2024-07-04. DOI: 10.1134/s1054661824700032
Qing Bu, Wei Wan, Ivan Leonov
Abstract
Image inpainting is the process of filling in missing or damaged areas of images. In recent years, this area has developed significantly, mainly owing to machine learning methods. Generative adversarial networks are a powerful tool for creating synthetic images: they are trained to create images similar to those in the original dataset. The use of such neural networks is not limited to creating realistic images. In areas where privacy is important, such as healthcare or finance, they help generate synthetic data that preserve the overall structure and statistical characteristics but do not contain the sensitive information of individuals. However, direct use of this architecture results in the generation of a completely new image. When the location of confidential information in an image can be indicated, it is advisable to use image inpainting so that only the secret information is replaced with synthetic content. This paper discusses key approaches to solving this problem, as well as the corresponding neural network architectures. It also raises questions about the use of these algorithms to protect confidential image information and the possibility of using these models when developing new applications.
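To make the workflow concrete, the sketch below masks a region assumed to hold sensitive content and fills it in. OpenCV's classical Telea inpainting is used here as a simple stand-in for the GAN-based inpainting models the paper discusses; the file paths and region coordinates are placeholders.

```python
# Hedged sketch: replace only a masked confidential region via inpainting.
import cv2
import numpy as np

image = cv2.imread("document.png")           # placeholder path
mask = np.zeros(image.shape[:2], dtype=np.uint8)
mask[40:80, 100:220] = 255                   # assumed location of confidential content

restored = cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("document_anonymized.png", restored)
```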
Algorithms of Isomorphism of Elementary Conjunctions Checking
Pub Date: 2024-04-10. DOI: 10.1134/s1054661824010103
T. Kosovskaya, Juan Zhou
Abstract
When solving artificial intelligence problems related to the study of complex structured objects, the language of predicate calculus is a convenient tool for describing such objects. The paper presents two algorithms for checking whether a pair of elementary conjunctions of predicate formulas is isomorphic (i.e., whether the conjunctions coincide up to variable names and the order of conjunctive terms). The first algorithm checks elementary conjunctions containing a single predicate symbol for isomorphism; furthermore, if the formulas are isomorphic, it finds a one-to-one correspondence between their arguments. If all predicates are binary, the proposed algorithm amounts to checking two directed graphs for isomorphism. The second algorithm checks elementary conjunctions containing multiple predicate symbols for isomorphism. Time complexity estimates are given for both algorithms.
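The binary-predicate special case can be illustrated directly: each conjunction P(x_i, x_j) & ... becomes a directed graph on its variables, and isomorphism of the conjunctions reduces to digraph isomorphism. The sketch below checks this with networkx rather than the paper's own algorithm; the two example formulas are hypothetical.

```python
# Hedged sketch: single binary predicate P -> conjunctions as digraphs on variables.
import networkx as nx

# P(x1, x2) & P(x2, x3) & P(x3, x1)
formula_a = [("x1", "x2"), ("x2", "x3"), ("x3", "x1")]
# P(y2, y3) & P(y3, y1) & P(y1, y2): same formula up to variable renaming and term order
formula_b = [("y2", "y3"), ("y3", "y1"), ("y1", "y2")]

matcher = nx.algorithms.isomorphism.DiGraphMatcher(nx.DiGraph(formula_a),
                                                   nx.DiGraph(formula_b))
ok = matcher.is_isomorphic()
print("isomorphic:", ok)
if ok:
    print("variable correspondence:", matcher.mapping)
```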
AI-Based Drone Assisted Human Rescue in Disaster Environments: Challenges and Opportunities
Pub Date: 2024-04-10. DOI: 10.1134/s1054661824010152
Narek Papyan, Michel Kulhandjian, Hovannes Kulhandjian, Levon Aslanyan
Abstract
In this survey, we focus on utilizing drone-based systems for the detection of individuals, particularly by identifying human screams and other distress signals. This study is highly relevant in post-disaster scenarios, including events such as earthquakes, hurricanes, military conflicts, and wildfires. Drones are capable of hovering over disaster-stricken areas that may be challenging for rescue teams to access directly, enabling them to pinpoint potential locations where people might be trapped. Drones can cover larger areas in shorter timeframes than ground-based rescue efforts or even specially trained search dogs. Unmanned aerial vehicles (UAVs), commonly referred to as drones, are frequently deployed for search-and-rescue missions during disaster situations. Typically, drones capture aerial images to assess structural damage and identify the extent of the disaster. They also employ thermal imaging technology to detect body heat signatures, which can help locate individuals. In some cases, larger drones are used to deliver essential supplies to people stranded in isolated disaster-stricken areas. In our discussion, we delve into the unique challenges associated with locating humans through aerial acoustics. The auditory system must distinguish between human cries and naturally occurring sounds such as animal calls and wind. It should also be capable of recognizing distinct patterns in signals such as shouting, clapping, or other ways in which people attempt to signal rescue teams. One solution to this challenge is to harness artificial intelligence (AI) to analyze sound frequencies and identify common audio “signatures.” Deep learning networks, such as convolutional neural networks (CNNs), can be trained on these signatures to filter out noise generated by drone motors and other environmental factors. Furthermore, signal processing techniques such as direction-of-arrival (DOA) estimation based on microphone array signals can enhance the precision of tracking the source of human sounds.
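As a toy illustration of the DOA idea mentioned above, the sketch below estimates the bearing of a sound source from the time delay between two microphones, found by cross-correlation. The microphone spacing, sample rate, and simulated delay are illustrative assumptions; real array processing (e.g., GCC-PHAT with more microphones) is considerably more involved.

```python
# Hedged sketch: two-microphone DOA from a cross-correlation time delay.
import numpy as np

fs = 16000                 # sample rate, Hz
c = 343.0                  # speed of sound, m/s
d = 0.2                    # assumed microphone spacing, m

rng = np.random.default_rng(0)
source = rng.standard_normal(fs)             # 1 s of a broadband signal
true_delay = 5                               # samples by which mic 2 lags mic 1
mic1 = source
mic2 = np.roll(source, true_delay)

corr = np.correlate(mic2, mic1, mode="full")
lag = np.argmax(corr) - (len(mic1) - 1)      # delay of mic 2 relative to mic 1, samples
tau = lag / fs                               # delay in seconds

angle = np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0)))
print(f"estimated delay: {lag} samples, DOA ~ {angle:.1f} deg")
```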
Unconditional Convergence of Sub-Gaussian Random Series
Pub Date: 2024-04-10. DOI: 10.1134/s1054661824010061
G. Giorgobiani, V. Kvaratskhelia, M. Menteshashvili
Abstract
In this paper, we explore the basic properties of sub-Gaussian random variables and random elements. We also present various notions of subgaussianity (weak, $\mathbf{T}$-, and $\mathbf{F}$-subgaussianity) of random elements with values in general Banach spaces. It is shown that the covariance operator of a $\mathbf{T}$-subgaussian random element is Gaussian, and some consequences of this result in spaces possessing certain geometric properties are noted. Moreover, the almost sure (a.s.) unconditional convergence of random series is considered, and a sufficient condition for the a.s. unconditional convergence of a random series of a special type with values in a Banach space with certain geometric properties is proved. By the a.s. unconditional convergence of a random series we understand the convergence of all rearrangements of the series on the same set of probability 1. With some effort, we prove one of the main results of the paper, which gives a necessary condition for the a.s. unconditional convergence of random series of a special type in a general Banach space. The proof uses a lemma that establishes a connection between the moments of a random variable and may be of independent interest.
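For reference, the standard scalar definition that these Banach-space notions generalize is given below; the paper's weak, $\mathbf{T}$-, and $\mathbf{F}$-subgaussian variants for random elements are not reproduced here.

```latex
% A centered random variable \xi is sub-Gaussian with parameter a \ge 0 if
\[
  \mathbb{E}\,\xi = 0
  \quad\text{and}\quad
  \mathbb{E}\,\exp(t\xi) \le \exp\!\left(\frac{a^{2}t^{2}}{2}\right)
  \quad\text{for all } t \in \mathbb{R}.
\]
% Gaussian and bounded centered random variables are the basic examples.
```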
Maximal k-Sum-Free Collections in an Abelian Group
Pub Date: 2024-04-10. DOI: 10.1134/s1054661824010188
Vahe Sargsyan
Abstract
Let $G$ be an Abelian group of order $n$, let $k \geqslant 2$ be an integer, and let $A_1, \ldots, A_k$ be nonempty subsets of $G$. The collection $(A_1, \ldots, A_k)$ is called $k$-sum-free (abbreviated $k$-SFC) if the equation $x_1 + \ldots + x_k = 0$ has no solutions with $x_1 \in A_1, \ldots, x_k \in A_k$. The family of $k$-SFCs in $G$ is denoted by $SFC_k(G)$. A collection $(A_1, \ldots, A_k) \in SFC_k(G)$ is called maximal by capacity if it maximizes the sum $|A_1| + \ldots + |A_k|$, and maximal by inclusion if for any $i \in \{1, \ldots, k\}$ and $x \in G \setminus A_i$ the collection $(A_1, \ldots, A_{i-1}, A_i \cup \{x\}, A_{i+1}, \ldots, A_k) \notin SFC_k(G)$. Put $\varrho_k(G) = |A_1| + \ldots + |A_k|$. In this work, we study the problem of the maximal value of $\varrho_k(G)$. In particular, the maximal value of $\varrho_k(Z_d)$ is determined for the cyclic group $Z_d$. Upper and lower bounds for $\varrho_k(G)$ are obtained for an Abelian group $G$. The structure of the maximal $k$-sum-free collection by capacity (by inclusion) is described for an arbitrary cyclic group.
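A minimal sketch of the definition for the cyclic group $Z_d$: a direct check that $x_1 + \ldots + x_k = 0 \pmod d$ has no solution with $x_i \in A_i$. The example sets are illustrative only and say nothing about the maximal collections studied in the paper.

```python
# Hedged sketch: brute-force k-SFC check in Z_d.
from itertools import product

def is_k_sum_free(collection, d):
    """True if no (x_1, ..., x_k) with x_i in A_i sums to 0 modulo d."""
    return all(sum(xs) % d != 0 for xs in product(*collection))

d = 7
print(is_k_sum_free(({1, 2, 3}, {1, 2, 3}), d))   # True: pairwise sums are 2..6 mod 7
print(is_k_sum_free(({1, 6}, {1, 6}), d))         # False: 1 + 6 = 0 mod 7
```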
On Hamiltonian Cycles in a 2-Strong Digraphs with Large Degrees and Cycles
Pub Date: 2024-04-10. DOI: 10.1134/s105466182401005x
S. Kh. Darbinyan
Abstract
In this note we prove the following: let $D$ be a 2-strong digraph of order $n$ such that $n - 1$ of its vertices have degrees at least $n + k$ and the remaining vertex $z$ has degree at least $n - k - 4$, where $k$ is a nonnegative integer. If $D$ contains a cycle of length at least $n - k - 2$ passing through $z$, then $D$ is Hamiltonian. This result is best possible in some sense.