Pub Date: 2023-11-28 | DOI: 10.3103/S1060992X23060085
V. B. Kotov, Z. B. Sokhova
Being necessary components of large smart systems (including the brain), commutators can be realized on the basis of a resistor array with variable resistors. The paper considers some switching (commutating) capabilities of such a resistor array. A switching graph is used to describe the operation of the array; this graph provides a visual representation of the generated high-conductivity current-flow channels. A two-terminal scheme is used to generate the switching graph: a voltage is supplied to a particular pair of poles (conductors), while the other poles are isolated from the power sources. Changing the pair of driven poles makes it possible to generate a series of switching graphs. We demonstrate the possibility of creating an interconnection between two or more blocks connected to the appropriate poles of the array. To do this, the resistor array must have a suitable signature (resistor directions), and the applied voltage must match that signature. The generated series are determined not only by the control signals but also by the history of the resistor array. Given preset resistor characteristics, competition between graph edges plays an important role in that it contributes to the thinning of the generated switching graph.
"Resistor Array as a Commutator," Optical Memory and Neural Networks 32 (2), S226–S236.
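As an illustration of the two-terminal scheme described above, the sketch below (not from the paper; the network topology, unit conductances, and the `solve_two_terminal` helper are assumptions for illustration) solves the Kirchhoff equations when a voltage is applied across one pair of poles while the remaining poles float, and reads off the per-edge currents from which a high-conductivity switching graph could be thresholded:

```python
import numpy as np

def solve_two_terminal(edges, n_nodes, pole_a, pole_b, v_applied=1.0):
    """Node potentials when v_applied is placed across (pole_a, pole_b)
    and every other pole is left floating (zero injected current)."""
    # Conductance (Laplacian) matrix of the resistor network.
    G = np.zeros((n_nodes, n_nodes))
    for i, j, g in edges:              # g: conductance of the resistor i--j
        G[i, i] += g
        G[j, j] += g
        G[i, j] -= g
        G[j, i] -= g
    known = {pole_a: v_applied, pole_b: 0.0}
    free = [n for n in range(n_nodes) if n not in known]
    # Kirchhoff's current law: G @ v has zero entries at the floating nodes.
    A = G[np.ix_(free, free)]
    b = -sum(G[np.ix_(free, [k])] * val for k, val in known.items()).ravel()
    v = np.zeros(n_nodes)
    v[pole_a], v[pole_b] = v_applied, 0.0
    v[free] = np.linalg.solve(A, b)
    # Edge currents; thresholding these gives a candidate switching graph.
    currents = {(i, j): g * abs(v[i] - v[j]) for i, j, g in edges}
    return v, currents

# A square of four unit resistors, driven across the diagonal 0--2.
square = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
v, currents = solve_two_terminal(square, 4, pole_a=0, pole_b=2)
print(v)   # by symmetry the two floating poles sit at 0.5 V
```

In a variable-resistor array the conductances would themselves change with the current history, which is what produces the path dependence the abstract mentions.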
Pub Date: 2023-11-28 | DOI: 10.3103/S1060992X2306005X
N. Filatov, M. Kindulov
Unsupervised domain adaptation plays a crucial role in semantic segmentation tasks due to the high cost of annotating data. Existing approaches often rely on large transformer models and momentum networks to stabilize and improve the self-training process. In this study, we investigate the applicability of low-rank adaptation (LoRA) to domain adaptation in computer vision. Our focus is on the unsupervised domain adaptation task of semantic segmentation, which requires adapting models from a synthetic dataset (GTA5) to a real-world dataset (Cityscapes). We employ the Swin Transformer as the feature extractor and the TransDA domain adaptation framework. Through experiments, we demonstrate that LoRA effectively stabilizes the self-training process, achieving training dynamics similar to those of the exponential moving average (EMA) mechanism. Moreover, LoRA provides metrics comparable to EMA under the same limited computation budget. In GTA5 → Cityscapes experiments, the adaptation pipeline with LoRA achieves a mIoU of 0.515, slightly surpassing the EMA baseline's mIoU of 0.513, while also offering an 11% speedup in training time and savings in video memory. These results highlight LoRA as a promising approach for domain adaptation in computer vision, offering a viable alternative to momentum networks that also saves computational resources.
"Low Rank Adaptation for Stable Domain Adaptation of Vision Transformers," Optical Memory and Neural Networks 32 (2), S277–S283.
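The low-rank update at the heart of LoRA can be sketched in a few lines (a generic illustration, not the authors' Swin/TransDA pipeline; the dimensions, rank, and `alpha` scaling are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 64, 64, 4, 8.0

W = rng.normal(size=(d_out, d_in))               # frozen pretrained weight
A = rng.normal(scale=0.01, size=(rank, d_in))    # trainable, small random init
B = np.zeros((d_out, rank))                      # trainable, zero init

def lora_forward(x):
    # Frozen path plus the low-rank correction, scaled by alpha / rank.
    return x @ W.T + (alpha / rank) * (x @ A.T @ B.T)

x = rng.normal(size=(2, d_in))
# With B initialised to zero, LoRA starts exactly at the pretrained model,
assert np.allclose(lora_forward(x), x @ W.T)
# while training only rank * (d_in + d_out) = 512 parameters instead of 4096.
```

Only `A` and `B` receive gradients during adaptation, which is where the memory and compute savings reported above come from.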
Pub Date: 2023-11-28 | DOI: 10.3103/S1060992X23060103
S. A. Linok, D. A. Yudin
We present an analysis of a self-supervised learning approach for monocular depth and ego-motion estimation. This is an important problem for the computer vision systems of robots, autonomous vehicles, and other intelligent agents equipped only with a monocular camera sensor. We have explored a number of neural network architectures that perform single-frame depth and multi-frame camera pose predictions to minimize the photometric error between consecutive frames in a sequence of camera images. Unlike other existing works, our proposed approach, called ERF-SfMLearner, examines the influence of the deep neural network receptive field on the performance of depth and ego-motion estimation. To do this, we study the modification of network layers with two convolution operators with an extended receptive field: dilated and deformable convolutions. We demonstrate on the KITTI dataset that increasing the receptive field leads to better metrics and lower errors for both depth and ego-motion estimation. Code is publicly available at github.com/linukc/ERF-SfMLearner.
"Influence of Neural Network Receptive Field on Monocular Depth and Ego-Motion Estimation," Optical Memory and Neural Networks 32 (2), S206–S213. Open access PDF: https://link.springer.com/content/pdf/10.3103/S1060992X23060103.pdf
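The photometric error minimised by such self-supervised pipelines is commonly an SSIM/L1 mixture between the target frame and the source frame warped by the predicted depth and pose. The sketch below uses a simplified whole-image SSIM rather than the windowed version, and the `alpha = 0.85` weighting is the conventional choice, assumed here rather than taken from the paper:

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Crude whole-image SSIM (real pipelines use local windows)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def photometric_error(target, warped, alpha=0.85):
    """SSIM/L1 mixture between the target frame and the warped source frame."""
    l1 = np.abs(target - warped).mean()
    return alpha * (1.0 - ssim_global(target, warped)) / 2.0 + (1.0 - alpha) * l1

img = np.random.default_rng(0).random((8, 8))
assert photometric_error(img, img) < 1e-9   # identical frames: zero error
```

Minimising this error over a sequence is what lets depth and pose networks train without ground-truth labels.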
Pub Date: 2023-11-28 | DOI: 10.3103/S1060992X23060036
G. N. Chugreeva, O. E. Sarmanova, K. A. Laptinskiy, S. A. Burikov, T. A. Dolenko
The paper presents the results of using convolutional neural networks to develop a multimodal photoluminescent nanosensor based on carbon dots (CDs) for the simultaneous measurement of a number of parameters of multicomponent liquid media. It is shown that 2D convolutional neural networks make it possible to determine the concentrations of the heavy-metal cations Cu2+, Ni2+ and Cr3+, NO₃⁻ anions, and the pH value of aqueous solutions with mean absolute errors of 0.29, 0.96, 0.22, 1.82 and 0.05, respectively (concentrations in mM). The resulting errors satisfy the needs of monitoring the composition of technological and industrial waters.
"Application of Convolutional Neural Networks for Creation of Photoluminescent Carbon Nanosensor for Heavy Metals Detection," Optical Memory and Neural Networks 32 (2), S244–S251.
Pub Date: 2023-11-28 | DOI: 10.3103/S1060992X23060061
I. A. Grishin, T. Y. Krutov, A. I. Kanev, V. I. Terekhov
The study of forest structure makes it possible to solve many important problems of forest inventory. LiDAR scanning is today one of the most widely used methods for obtaining information about a forest area. Calculating the structural parameters of plantations requires reliable segmentation of the initial data, yet segmentation quality can be difficult to assess over large volumes of forest data. For this purpose, a deep-learning-based system for evaluating segmentation correctness and quality was developed in this work. Segmentation was carried out on a forest area with a high planting density using layer-by-layer segmentation with the DBSCAN method, with preliminary detection of planting coordinates and partitioning of the plot using a Voronoi diagram. The correctness model was trained and tested on the extracted data of individual trees using the PointNet++ and CurveNet neural networks, and good model accuracies of 89 and 88%, respectively, were obtained. These models are proposed for assessing the quality of clustering methods, as well as for improving the quality of LiDAR data segmentation of individual point clouds of forest plantations by detecting frequently occurring segmentation defects.
"Individual Tree Segmentation Quality Evaluation Using Deep Learning Models LiDAR Based," Optical Memory and Neural Networks 32 (2), S270–S276.
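The clustering step can be illustrated with a minimal brute-force DBSCAN (a generic sketch, not the authors' layer-by-layer pipeline with coordinate detection and Voronoi partitioning; the `eps` and `min_pts` values and the toy trunk coordinates are illustrative):

```python
import numpy as np

def dbscan(points, eps=1.0, min_pts=4):
    """Brute-force DBSCAN; returns one label per point (-1 = noise)."""
    n = len(points)
    # Pairwise distances and eps-neighbourhoods (neighbour sets include self).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(row <= eps) for row in d]
    labels = np.full(n, -1)
    cluster = 0
    for p in range(n):
        if labels[p] != -1 or len(neighbors[p]) < min_pts:
            continue                    # already labelled, or not a core point
        labels[p] = cluster             # grow a new cluster from core point p
        stack = list(neighbors[p])
        while stack:
            q = stack.pop()
            if labels[q] == -1:
                labels[q] = cluster
                if len(neighbors[q]) >= min_pts:   # q is core: keep expanding
                    stack.extend(neighbors[q])
        cluster += 1
    return labels

# Two tight clumps of points, standing in for two tree cross-sections.
trunks = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
                   [10.0, 10.0], [10.1, 10.0], [10.0, 10.1], [10.1, 10.1]])
labels = dbscan(trunks, eps=0.5, min_pts=3)
print(labels)   # two clusters: [0 0 0 0 1 1 1 1]
```

In the paper's setting this kind of clustering is applied per height layer, which is exactly where the segmentation defects the correctness model detects tend to arise.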
Pub Date: 2023-11-28 | DOI: 10.3103/S1060992X23060127
A. Yu. Tiumentsev, Yu. V. Tiumentsev
Motion control of modern and advanced aircraft has to be provided under conditions of incomplete and inaccurate knowledge of their parameters and characteristics, possible flight regimes, and environmental influences. In addition, a variety of abnormal situations may arise during flight, in particular equipment failures and structural damage. The control system must be able to adapt to these changes by adjusting the control laws in use. The tools of adaptive control allow us to meet this requirement. One effective approach to implementing adaptivity is based on the methods and tools of neural network modeling and control. A fairly common option in solving such problems is the use of recurrent neural networks, in particular networks of the NARX and NARMAX type. However, in a number of cases, particularly for control objects with complicated dynamic properties, this approach is ineffective. As a possible alternative, it is proposed to consider deep neural networks, used both for modeling dynamical systems and for controlling them. The capabilities of this approach are demonstrated on a real applied problem: the synthesis of a control law for the longitudinal angular motion of a supersonic passenger airplane. The results obtained allow us to evaluate the effectiveness of the proposed approach, including in failure situations.
"Motion Control of Supersonic Passenger Aircraft Using Machine Learning Methods," Optical Memory and Neural Networks 32 (2), S195–S205.
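The NARX-style input layout mentioned above (the network predicts the next output from lagged outputs and lagged controls) can be sketched as a data-preparation step; the lag depths `ny` and `nu` and the `narx_features` helper are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def narx_features(y, u, ny=2, nu=2):
    """Build NARX-style regressor rows:
    x(k) = [y(k-1)..y(k-ny), u(k-1)..u(k-nu)], with target y(k)."""
    start = max(ny, nu)
    X, t = [], []
    for k in range(start, len(y)):
        X.append(np.concatenate([y[k - ny:k][::-1], u[k - nu:k][::-1]]))
        t.append(y[k])
    return np.array(X), np.array(t)

y = np.arange(6.0)    # toy output history
u = np.zeros(6)       # toy control history
X, t = narx_features(y, u)
print(X.shape)        # (4, 4): 4 usable samples, ny + nu = 4 features each
```

Whether the regressor feeding on these rows is a shallow recurrent network or a deep one is exactly the design choice the paper investigates.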
Pub Date: 2023-11-28 | DOI: 10.3103/S1060992X23060073
A. M. Korsakov, T. T. Isakov, A. V. Bakhshiev
The article presents a method for implementing incremental learning on a compartmental spiking neuron model. The training of a single neuron with the possibility of forming new classes was chosen as the incremental learning scenario. During training, only the new sample was used, without access to the entire previous training set. The results of experiments on the Iris dataset are presented, demonstrating the applicability of the chosen strategy for incremental learning on a compartmental spiking neuron model.
"Strategy of Incremental Learning on a Compartmental Spiking Neuron Model," Optical Memory and Neural Networks 32 (2), S237–S243.
Pub Date: 2023-11-28 | DOI: 10.3103/S1060992X23060115
D. A. Tarkhov, D. A. Lavygin, O. A. Skripkin, M. D. Zakirova, T. V. Lazovskaya
The control of unstable systems is a critically important problem, as an unstable object can pose a significant danger to humans and the environment when it fails. In this paper, a neural network was trained to determine the optimal control for an unstable system, based on a comparative analysis of two control methods: the implicit Euler method and the linearization method. The network identifies the optimal control from the position of a point on the phase plane.
"Optimal Control Selection for Stabilizing the Inverted Pendulum Problem Using Neural Network Method," Optical Memory and Neural Networks 32 (2), S214–S225.
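One of the two compared methods, the implicit Euler step, can be sketched for an inverted pendulum. The dynamics (theta'' = (g/L) sin(theta) + u) and the fixed-point solver are assumptions for illustration, not the paper's exact formulation:

```python
import math

def f(theta, omega, u, g=9.81, L=1.0):
    # Inverted pendulum about the upright equilibrium: theta'' = (g/L) sin(theta) + u
    return omega, (g / L) * math.sin(theta) + u

def implicit_euler_step(theta, omega, u, h=0.01, iters=20):
    """One implicit (backward) Euler step: x_{n+1} = x_n + h * f(x_{n+1}),
    solved by fixed-point iteration starting from the previous state."""
    th, om = theta, omega
    for _ in range(iters):
        dth, dom = f(th, om, u)
        th = theta + h * dth
        om = omega + h * dom
    return th, om

th, om = implicit_euler_step(theta=0.1, omega=0.0, u=0.0)
# near the unstable upright equilibrium an uncontrolled perturbation grows
assert th > 0.1 and om > 0.0
```

The implicit formulation trades a per-step nonlinear solve for better numerical stability, which is one reason to compare it against simple linearization when generating training data for the controller network.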
Pub Date: 2023-11-28 | DOI: 10.3103/S1060992X23060048
A. V. Demidovskij, M. S. Kazyulina, I. G. Salnikov, A. M. Tugaryov, A. I. Trutnev, S. V. Pavlov
Given the unprecedented growth of deep learning applications, training acceleration is becoming a subject of strong academic interest. Hebbian learning, as a training strategy alternative to backpropagation, presents a promising optimization approach due to its locality, lower computational complexity, and parallelization potential. Nevertheless, due to the challenging optimization of Hebbian learning, there is no widely accepted approach to the implementation of such mixed strategies. The current paper overviews the four main strategies for updating weights using the Hebbian rule, including its widely used modifications, Oja's and Instar rules. Additionally, the paper analyses 21 industrial implementations of Hebbian learning, discusses the merits and shortcomings of Hebbian rules, and presents the results of computational experiments on four convolutional networks. Experiments show that the most efficient implementation strategy of Hebbian learning allows for a 1.66× acceleration and a 3.76× reduction in memory consumption when updating DenseNet121 weights compared to backpropagation. Finally, a comparative analysis of the implementation strategies is carried out and grounded recommendations for applying Hebbian learning are formulated.
"Implementation Challenges and Strategies for Hebbian Learning in Convolutional Neural Networks," Optical Memory and Neural Networks 32 (2), S252–S264.
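Oja's rule, one of the surveyed modifications of the plain Hebbian update, can be sketched on synthetic data (the learning rate, data shape, and convergence check are illustrative assumptions; the rule's known behaviour is that it drives the weight vector toward the leading principal component of the inputs):

```python
import numpy as np

rng = np.random.default_rng(1)
# Anisotropic data cloud whose dominant direction is `principal`.
principal = np.array([3.0, 1.0]) / np.sqrt(10.0)
X = rng.normal(size=(2000, 2)) * [3.0, 0.3]          # axis-aligned cloud
R = np.array([[principal[0], -principal[1]],
              [principal[1],  principal[0]]])        # rotation e1 -> principal
X = X @ R.T

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    # Oja's rule: Hebbian term y*x with a decay y^2*w that bounds |w|.
    w += eta * y * (x - y * w)

w /= np.linalg.norm(w)
# w aligns (up to sign) with the leading principal component of the data.
assert abs(abs(w @ principal) - 1.0) < 0.05
```

The update is purely local (it uses only the pre-synaptic input `x` and post-synaptic output `y`), which is the property that makes Hebbian-style rules attractive for the parallel implementations the paper surveys.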
Pub Date: 2023-11-28 | DOI: 10.3103/S1060992X23060097
P. Kuderov, E. Dzhivelikian, A. I. Panov
For autonomous AI systems, it is important to process spatiotemporal information: to encode and memorize it, and to extract and reuse abstractions effectively. What comes naturally to biological intelligence is still a challenge for AI systems. In this paper, we propose a biologically plausible model of spatiotemporal memory with an attractor module and study its ability to encode sequences and to efficiently extract and reuse repetitive patterns. The results of experiments on synthetic data, textual data, and data from DVS cameras demonstrate a qualitative improvement in the properties of the model when the attractor module is used.
"Attractor Properties of Spatiotemporal Memory in Effective Sequence Processing Task," Optical Memory and Neural Networks 32 (2), S284–S292. Open access PDF: https://link.springer.com/content/pdf/10.3103/S1060992X23060097.pdf