Steve J. Kongni, Thierry Njougouo, Patrick Louodop, Robert Tchitnga, Fernando F. Ferreira, Hilda A. Cerdeira
Swarmalators, systems of oscillators whose internal phases and spatial dynamics are coupled, present diverse collective behaviors which in some cases lead to explosive synchronization in a finite population as a function of the coupling parameter between internal phases. Near the synchronization transition, the phase energy of the particles is represented by the XY model, and they undergo a transition that can be of first or second order depending on the distribution of the natural frequencies of their internal dynamics. The first-order transition is reached after an intermediate state, the Static Wings Phase Wave (SWPW) state, from which the nodes, in a cascade over time, achieve complete phase synchronization at a precise value of the coupling constant. For a particular distribution of natural frequencies, a new phenomenon, the Rotational Splintered Phase Wave (RSpPW) state, is observed; it leads progressively to synchronization through clusters that switch alternately between one and two and whose frequency decreases as the phase coupling increases.
{"title":"Expected and unexpected routes to synchronization in a system of swarmalators","authors":"Steve J. Kongni, Thierry Njougouo, Patrick Louodop, Robert Tchitnga, Fernando F. Ferreira, Hilda A. Cerdeira","doi":"arxiv-2409.10039","DOIUrl":"https://doi.org/arxiv-2409.10039","url":null,"abstract":"Systems of oscillators whose internal phases and spatial dynamics are\u0000coupled, swarmalators, present diverse collective behaviors which in some cases\u0000lead to explosive synchronization in a finite population as a function of the\u0000coupling parameter between internal phases. Near the synchronization\u0000transition, the phase energy of the particles is represented by the XY model,\u0000and they undergo a transition which can be of the first order or second\u0000depending on the distribution of natural frequencies of their internal\u0000dynamics. The first order transition is obtained after an intermediate state\u0000(Static Wings Phase Wave state (SWPW)) from which the nodes, in cascade over\u0000time, achieve complete phase synchronization at a precise value of the coupling\u0000constant. For a particular case of natural frequencies distribution, a new\u0000phenomenon of Rotational Splintered Phase Wave state (RSpPW) is observed and\u0000leads progressively to synchronization through clusters switching alternatively\u0000from one to two and for which the frequency decreases as the phase coupling\u0000increases.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142260594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mengke Wei, Andreas Amann, Oleksandr Burylko, Xiujing Han, Serhiy Yanchuk, Jürgen Kurths
Adaptive dynamical networks are ubiquitous in real-world systems. This paper aims to explore the synchronization dynamics in networks of adaptive oscillators based on a paradigmatic system of adaptively coupled phase oscillators. Our numerical observations reveal the emergence of synchronization cluster bursting, characterized by periodic transitions between cluster synchronization and global synchronization. By investigating a reduced model, the mechanisms underlying synchronization cluster bursting are clarified. We show that a minimal model exhibiting this phenomenon can be reduced to a phase oscillator with complex-valued adaptation. Furthermore, the adaptivity of the system leads to the appearance of additional symmetries and thus to the coexistence of stable bursting solutions with very different Kuramoto order parameters.
{"title":"Synchronization cluster bursting in adaptive oscillators networks","authors":"Mengke Wei, Andreas Amann, Oleksandr Burylko, Xiujing Han, Serhiy Yanchuk, Jürgen Kurths","doi":"arxiv-2409.08348","DOIUrl":"https://doi.org/arxiv-2409.08348","url":null,"abstract":"Adaptive dynamical networks are ubiquitous in real-world systems. This paper\u0000aims to explore the synchronization dynamics in networks of adaptive\u0000oscillators based on a paradigmatic system of adaptively coupled phase\u0000oscillators. Our numerical observations reveal the emergence of synchronization\u0000cluster bursting, characterized by periodic transitions between cluster\u0000synchronization and global synchronization. By investigating a reduced model,\u0000the mechanisms underlying synchronization cluster bursting are clarified. We\u0000show that a minimal model exhibiting this phenomenon can be reduced to a phase\u0000oscillator with complex-valued adaptation. Furthermore, the adaptivity of the\u0000system leads to the appearance of additional symmetries and thus to the\u0000coexistence of stable bursting solutions with very different Kuramoto order\u0000parameters.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142260633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study a simple model of swarmalators subject to periodic forcing and confined to move around a one-dimensional ring. This is a toy model for physical systems with a mix of sync, swarming, and forcing such as colloidal micromotors. We find several emergent macrostates and characterize the phase boundaries between them analytically. The most novel state is a swarmalator chimera, where the population splits into two sync dots, which enclose a `train' of swarmalators that run around a peanut-shaped loop.
{"title":"The forced one-dimensional swarmalator model","authors":"Md Sayeed Anwar, Dibakar Ghosh, Kevin O'Keeffe","doi":"arxiv-2409.05342","DOIUrl":"https://doi.org/arxiv-2409.05342","url":null,"abstract":"We study a simple model of swarmalators subject to periodic forcing and\u0000confined to move around a one-dimensional ring. This is a toy model for\u0000physical systems with a mix of sync, swarming, and forcing such as colloidal\u0000micromotors. We find several emergent macrostates and characterize the phase\u0000boundaries between them analytically. The most novel state is a swarmalator\u0000chimera, where the population splits into two sync dots, which enclose a\u0000`train' of swarmalators that run around a peanut-shaped loop.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142208012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sajad Jafari, Atiyeh Bayani, Fatemeh Parastesh, Karthikeyan Rajagopal, Charo I. del Genio, Ludovico Minati, Stefano Boccaletti
The Master Stability Function is a robust and useful tool for determining the conditions for synchronization stability in a network of coupled systems. While a comprehensive classification exists in the case in which the nodes are chaotic dynamical systems, its application to periodic systems has been less explored. By studying several well-known periodic systems, we establish a comprehensive framework to understand and classify their properties of synchronizability. This allows us to define five distinct classes of synchronization stability, including some that are unique to periodic systems. Specifically, in periodic systems, the Master Stability Function vanishes at the origin, and it can therefore display behavioral classes that are not achievable in chaotic systems, where it starts, instead, at a strictly positive value. Moreover, our results challenge the widely held belief that periodic systems are easily put in a stable synchronous state, showing, instead, the common occurrence of a lower threshold for synchronization stability.
{"title":"Periodic systems have new classes of synchronization stability","authors":"Sajad Jafari, Atiyeh Bayani, Fatemeh Parastesh, Karthikeyan Rajagopal, Charo I. del Genio, Ludovico Minati, Stefano Boccaletti","doi":"arxiv-2409.04193","DOIUrl":"https://doi.org/arxiv-2409.04193","url":null,"abstract":"The Master Stability Function is a robust and useful tool for determining the\u0000conditions of synchronization stability in a network of coupled systems. While\u0000a comprehensive classification exists in the case in which the nodes are\u0000chaotic dynamical systems, its application to periodic systems has been less\u0000explored. By studying several well-known periodic systems, we establish a\u0000comprehensive framework to understand and classify their properties of\u0000synchronizability. This allows us to define five distinct classes of\u0000synchronization stability, including some that are unique to periodic systems.\u0000Specifically, in periodic systems, the Master Stability Function vanishes at\u0000the origin, and it can therefore display behavioral classes that are not\u0000achievable in chaotic systems, where it starts, instead, at a strictly positive\u0000value. Moreover, our results challenge the widely-held belief that periodic\u0000systems are easily put in a stable synchronous state, showing, instead, the\u0000common occurrence of a lower threshold for synchronization stability.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142207945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we address the reduced-order synchronization problem between two chaotic memristive Hindmarsh-Rose (HR) neurons of different orders using two distinct methods. The first method employs the Lyapunov active control technique. Through this technique, we develop appropriate control functions to synchronize a 4D chaotic HR neuron (response system) with the canonical projection of a 5D chaotic HR neuron (drive system). Numerical simulations are provided to demonstrate the effectiveness of this approach. The second method is data-driven and leverages a machine learning-based control technique. It uses an ad hoc combination of reservoir computing (RC) algorithms, incorporating reservoir observer (RO), online control (OC), and online predictive control (OPC) schemes. We anticipate that our effective heuristic RC adaptive control algorithm will guide the development of more formally structured and systematic, data-driven RC control approaches to chaotic synchronization problems, and inspire more data-driven neuromorphic methods for controlling and achieving synchronization in chaotic neural networks in vivo.
{"title":"Reduced-order adaptive synchronization in a chaotic neural network with parameter mismatch: A dynamical system vs. machine learning approach","authors":"Jan Kobiolka, Jens Habermann, Marius E. Yamakou","doi":"arxiv-2408.16155","DOIUrl":"https://doi.org/arxiv-2408.16155","url":null,"abstract":"In this paper, we address the reduced-order synchronization problem between\u0000two chaotic memristive Hindmarsh-Rose (HR) neurons of different orders using\u0000two distinct methods. The first method employs the Lyapunov active control\u0000technique. Through this technique, we develop appropriate control functions to\u0000synchronize a 4D chaotic HR neuron (response system) with the canonical\u0000projection of a 5D chaotic HR neuron (drive system). Numerical simulations are\u0000provided to demonstrate the effectiveness of this approach. The second method\u0000is data-driven and leverages a machine learning-based control technique. Our\u0000technique utilizes an ad hoc combination of reservoir computing (RC)\u0000algorithms, incorporating reservoir observer (RO), online control (OC), and\u0000online predictive control (OPC) algorithms. We anticipate our effective\u0000heuristic RC adaptive control algorithm to guide the development of more\u0000formally structured and systematic, data-driven RC control approaches to\u0000chaotic synchronization problems, and to inspire more data-driven neuromorphic\u0000methods for controlling and achieving synchronization in chaotic neural\u0000networks in vivo.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"58 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142207946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Corbit R. Sampson, Mason A. Porter, Juan G. Restrepo
In traditional models of opinion dynamics, each agent in a network has an opinion and changes in opinions arise from pairwise (i.e., dyadic) interactions between agents. However, in many situations, groups of individuals can possess a collective opinion that may differ from the opinions of the individuals. In this paper, we study the effects of group opinions on opinion dynamics. We formulate a hypergraph model in which both individual agents and groups of 3 agents have opinions, and we examine how opinions evolve through both dyadic interactions and group memberships. In some parameter regimes, we find that the presence of group opinions can lead to oscillatory and excitable opinion dynamics. In the oscillatory regime, the mean opinion of the agents in a network has self-sustained oscillations. In the excitable regime, finite-size effects create large but short-lived opinion swings (as in social fads). We develop a mean-field approximation of our model and obtain good agreement with direct numerical simulations. We also show, both numerically and via our mean-field description, that oscillatory dynamics occur only when the number of dyadic and polyadic interactions per agent are not completely correlated. Our results illustrate how polyadic structures, such as groups of agents, can have important effects on collective opinion dynamics.
{"title":"Oscillatory and Excitable Dynamics in an Opinion Model with Group Opinions","authors":"Corbit R. Sampson, Mason A. Porter, Juan G. Restrepo","doi":"arxiv-2408.13336","DOIUrl":"https://doi.org/arxiv-2408.13336","url":null,"abstract":"In traditional models of opinion dynamics, each agent in a network has an\u0000opinion and changes in opinions arise from pairwise (i.e., dyadic) interactions\u0000between agents. However, in many situations, groups of individuals can possess\u0000a collective opinion that may differ from the opinions of the individuals. In\u0000this paper, we study the effects of group opinions on opinion dynamics. We\u0000formulate a hypergraph model in which both individual agents and groups of 3\u0000agents have opinions, and we examine how opinions evolve through both dyadic\u0000interactions and group memberships. In some parameter regimes, we find that the\u0000presence of group opinions can lead to oscillatory and excitable opinion\u0000dynamics. In the oscillatory regime, the mean opinion of the agents in a\u0000network has self-sustained oscillations. In the excitable regime, finite-size\u0000effects create large but short-lived opinion swings (as in social fads). We\u0000develop a mean-field approximation of our model and obtain good agreement with\u0000direct numerical simulations. We also show, both numerically and via our\u0000mean-field description, that oscillatory dynamics occur only when the number of\u0000dyadic and polyadic interactions per agent are not completely correlated. Our\u0000results illustrate how polyadic structures, such as groups of agents, can have\u0000important effects on collective opinion dynamics.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"51 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142207950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual cues play crucial roles in the collective motion of animals, birds, fish, and insects. The interaction mediated by visual information is essentially non-local and has a many-body nature due to occlusion, which poses a challenging problem in modeling the emergent collective behavior. In this Letter, we introduce a Boltzmann-equation approach incorporating non-local visual interaction. Occlusion is treated in a self-consistent manner via a coarse-grained density field, which renders the interaction effectively pairwise. Our model also incorporates the recent finding that each organism stochastically selects a neighbor to interact with at each instant. We analytically derive the order-disorder transition point, and show that the visual screening effect substantially raises the transition threshold, which does not vanish when the density of the agents or the range of the intrinsic interaction is taken to infinity. Our analysis suggests that the model exhibits a discontinuous transition, as in local interaction models, but that the discontinuity is weakened by the non-locality. Our study clarifies the essential role of non-locality in the visual interactions among moving organisms.
{"title":"Boltzmann approach to collective motion via non-local visual interaction","authors":"Susumu Ito, Nariya Uchida","doi":"arxiv-2408.09917","DOIUrl":"https://doi.org/arxiv-2408.09917","url":null,"abstract":"Visual cues play crucial roles in the collective motion of animals, birds,\u0000fish, and insects. The interaction mediated by visual information is\u0000essentially non-local and has many-body nature due to occlusion, which poses a\u0000challenging problem in modeling the emergent collective behavior. In this\u0000Letter, we introduce a Boltzmann-equation approach incorporating non-local\u0000visual interaction. Occlusion is treated in a self-consistent manner via a\u0000coarse-grained density field, which renders the interaction effectively\u0000pairwise. Our model also incorporates the recent finding that each organism\u0000stochastically selects a neighbor to interact at each instant. We analytically\u0000derive the order-disorder transition point, and show that the visual screening\u0000effect substantially raises the transition threshold, which does not vanish\u0000when the density of the agents or the range of the intrinsic interaction is\u0000taken to infinity. Our analysis suggests that the model exhibits a\u0000discontinuous transition as in the local interaction models, and but the\u0000discontinuity is weakened by the non-locality. Our study clarifies the\u0000essential role of non-locality in the visual interactions among moving\u0000organisms.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"76 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142207947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-organization in complex systems is a process in which randomness is reduced and emergent structures appear that allow the system to compete more effectively with its other possible states or with other systems. It occurs only in the presence of energy gradients, facilitating energy transmission through the system and entropy production. Being a dynamic process, self-organization requires a dynamic measure and dynamic principles. The principles of decreasing unit action and increasing total action are two dynamic variational principles that can be applied to a self-organizing system. Based on this, average action efficiency can serve as a quantitative measure of the degree of self-organization. Positive feedback loops connect this measure with all other characteristics of a complex system, providing all of them with a mechanism for exponential growth and indicating power-law relationships between each of them, as confirmed by data and simulations. In this study, we apply those principles and the model to agent-based simulations. We find that those principles explain self-organization well and that the results confirm the model. Measuring action efficiency provides a new answer to the question: what is complexity, and how complex is a system? This work shows the explanatory and predictive power of those models, which can help us understand and design better complex systems.
{"title":"Why and How do Complex Systems Self-Organize at All? Average Action Efficiency as a Predictor, Measure, Driver, and Mechanism of Self-Organization","authors":"Matthew J Brouillet, Georgi Yordanov Georgiev","doi":"arxiv-2408.10278","DOIUrl":"https://doi.org/arxiv-2408.10278","url":null,"abstract":"Self-organization in complex systems is a process in which randomness is\u0000reduced and emergent structures appear that allow the system to function in a\u0000more competitive way with other states of the system or with other systems. It\u0000occurs only in the presence of energy gradients, facilitating energy\u0000transmission through the system and entropy production. Being a dynamic\u0000process, self-organization requires a dynamic measure and dynamic principles.\u0000The principles of decreasing unit action and increasing total action are two\u0000dynamic variational principles that are viable to utilize in a self-organizing\u0000system. Based on this, average action efficiency can serve as a quantitative\u0000measure of the degree of self-organization. Positive feedback loops connect\u0000this measure with all other characteristics of a complex system, providing all\u0000of them with a mechanism for exponential growth, and indicating power law\u0000relationships between each of them as confirmed by data and simulations. In\u0000this study, we apply those principles and the model to agent-based simulations.\u0000We find that those principles explain self-organization well and that the\u0000results confirm the model. By measuring action efficiency we can have a new\u0000answer to the question: \"What is complexity and how complex is a system?\". This\u0000work shows the explanatory and predictive power of those models, which can help\u0000understand and design better complex systems.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142207949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We explore a one-to-one correspondence between a neural network (NN) and a statistical mechanical spin model where neurons are mapped to Ising spins and weights to spin-spin couplings. The process of training an NN produces a family of spin Hamiltonians parameterized by training time. We study the magnetic phases and the melting transition temperature as training progresses. First, we prove analytically that the common initial state before training--an NN with independent random weights--maps to a layered version of the classical Sherrington-Kirkpatrick spin glass exhibiting a replica symmetry breaking. The spin-glass-to-paramagnet transition temperature is calculated. Further, we use the Thouless-Anderson-Palmer (TAP) equations--a theoretical technique to analyze the landscape of energy minima of random systems--to determine the evolution of the magnetic phases on two types of NNs (one with continuous and one with binarized activations) trained on the MNIST dataset. The two NN types give rise to similar results, showing a quick destruction of the spin glass and the appearance of a phase with a hidden order, whose melting transition temperature $T_c$ grows as a power law in training time. We also discuss the properties of the spectrum of the spin system's bond matrix in the context of rich vs. lazy learning. We suggest that this statistical mechanical view of NNs provides a useful unifying perspective on the training process, which can be viewed as selecting and strengthening a symmetry-broken state associated with the training task.
{"title":"Neural Networks as Spin Models: From Glass to Hidden Order Through Training","authors":"Richard Barney, Michael Winer, Victor Galitski","doi":"arxiv-2408.06421","DOIUrl":"https://doi.org/arxiv-2408.06421","url":null,"abstract":"We explore a one-to-one correspondence between a neural network (NN) and a\u0000statistical mechanical spin model where neurons are mapped to Ising spins and\u0000weights to spin-spin couplings. The process of training an NN produces a family\u0000of spin Hamiltonians parameterized by training time. We study the magnetic\u0000phases and the melting transition temperature as training progresses. First, we\u0000prove analytically that the common initial state before training--an NN with\u0000independent random weights--maps to a layered version of the classical\u0000Sherrington-Kirkpatrick spin glass exhibiting a replica symmetry breaking. The\u0000spin-glass-to-paramagnet transition temperature is calculated. Further, we use\u0000the Thouless-Anderson-Palmer (TAP) equations--a theoretical technique to\u0000analyze the landscape of energy minima of random systems--to determine the\u0000evolution of the magnetic phases on two types of NNs (one with continuous and\u0000one with binarized activations) trained on the MNIST dataset. The two NN types\u0000give rise to similar results, showing a quick destruction of the spin glass and\u0000the appearance of a phase with a hidden order, whose melting transition\u0000temperature $T_c$ grows as a power law in training time. We also discuss the\u0000properties of the spectrum of the spin system's bond matrix in the context of\u0000rich vs. lazy learning. We suggest that this statistical mechanical view of NNs\u0000provides a useful unifying perspective on the training process, which can be\u0000viewed as selecting and strengthening a symmetry-broken state associated with\u0000the training task.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142207948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Debasish Saha, Sonja Tarama, Hartmut Löwen, Stefan U. Egelhaaf
Colloids play an important role in fundamental science as well as in nature and technology. They have had a strong impact on the fundamental understanding of statistical physics. For example, colloids have helped to obtain a better understanding of collective phenomena, ranging from phase transitions and glass formation to the swarming of active Brownian particles. Yet the success of colloidal systems hinges crucially on the specific physical and chemical properties of the colloidal particles, i.e. particles with the appropriate characteristics must be available. Here we present an idea to create particles with freely selectable properties. The properties might depend, for example, on the presence of other particles (hence mimicking specific pair or many-body interactions), previous configurations (hence introducing some memory or feedback), or a directional bias (hence changing the dynamics). Each particle is fully controlled, without directly interfering with the sample, and can receive external commands through a predefined algorithm that can take into account any input parameters. This is realized with computer-controlled colloids, which we term cybloids - short for cybernetic colloids. The potential of cybloids is illustrated by programming a time-delayed external potential acting on a single colloid and interaction potentials for many colloids. Both an attractive harmonic potential and an annular potential are implemented. For a single particle, this programming can cause subdiffusive behavior or induce activity. For many colloids, the programmed interaction potential allows a crystal structure to be selected at will. Beyond these examples, we discuss further opportunities that cybloids offer.
{"title":"Cybloids $-$ Creation and Control of Cybernetic Colloids","authors":"Debasish Saha, Sonja Tarama, Hartmut Löwen, Stefan U. Egelhaaf","doi":"arxiv-2408.00336","DOIUrl":"https://doi.org/arxiv-2408.00336","url":null,"abstract":"Colloids play an important role in fundamental science as well as in nature\u0000and technology. They have had a strong impact on the fundamental understanding\u0000of statistical physics. For example, colloids have helped to obtain a better\u0000understanding of collective phenomena, ranging from phase transitions and glass\u0000formation to the swarming of active Brownian particles. Yet the success of\u0000colloidal systems hinges crucially on the specific physical and chemical\u0000properties of the colloidal particles, i.e. particles with the appropriate\u0000characteristics must be available. Here we present an idea to create particles\u0000with freely selectable properties. The properties might depend, for example, on\u0000the presence of other particles (hence mimicking specific pair or many-body\u0000interactions), previous configurations (hence introducing some memory or\u0000feedback), or a directional bias (hence changing the dynamics). Without\u0000directly interfering with the sample, each particle is fully controlled and can\u0000receive external commands through a predefined algorithm that can take into\u0000account any input parameters. This is realized with computer-controlled\u0000colloids, which we term cybloids - short for cybernetic colloids. The potential\u0000of cybloids is illustrated by programming a time-delayed external potential\u0000acting on a single colloid and interaction potentials for many colloids. Both\u0000an attractive harmonic potential and an annular potential are implemented. For\u0000a single particle, this programming can cause subdiffusive behavior or lend\u0000activity. For many colloids, the programmed interaction potential allows to\u0000select a crystal structure at wish. Beyond these examples, we discuss further\u0000opportunities which cybloids offer.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141884384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}