{"title":"Neural Networks as Spin Models: From Glass to Hidden Order Through Training","authors":"Richard Barney, Michael Winer, Victor Galitski","doi":"arxiv-2408.06421","DOIUrl":null,"url":null,"abstract":"We explore a one-to-one correspondence between a neural network (NN) and a\nstatistical mechanical spin model where neurons are mapped to Ising spins and\nweights to spin-spin couplings. The process of training an NN produces a family\nof spin Hamiltonians parameterized by training time. We study the magnetic\nphases and the melting transition temperature as training progresses. First, we\nprove analytically that the common initial state before training--an NN with\nindependent random weights--maps to a layered version of the classical\nSherrington-Kirkpatrick spin glass exhibiting a replica symmetry breaking. The\nspin-glass-to-paramagnet transition temperature is calculated. Further, we use\nthe Thouless-Anderson-Palmer (TAP) equations--a theoretical technique to\nanalyze the landscape of energy minima of random systems--to determine the\nevolution of the magnetic phases on two types of NNs (one with continuous and\none with binarized activations) trained on the MNIST dataset. The two NN types\ngive rise to similar results, showing a quick destruction of the spin glass and\nthe appearance of a phase with a hidden order, whose melting transition\ntemperature $T_c$ grows as a power law in training time. We also discuss the\nproperties of the spectrum of the spin system's bond matrix in the context of\nrich vs. lazy learning. We suggest that this statistical mechanical view of NNs\nprovides a useful unifying perspective on the training process, which can be\nviewed as selecting and strengthening a symmetry-broken state associated with\nthe training task.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Adaptation and Self-Organizing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.06421","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We explore a one-to-one correspondence between a neural network (NN) and a statistical mechanical spin model where neurons are mapped to Ising spins and weights to spin-spin couplings. The process of training an NN produces a family of spin Hamiltonians parameterized by training time. We study the magnetic phases and the melting transition temperature as training progresses.
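To make the correspondence concrete, here is a minimal sketch (our illustration, not code from the paper) that assembles the weight matrices of a feedforward NN into a single symmetric bond matrix $J$, so that every neuron becomes an Ising spin coupled only to neurons in adjacent layers:

    import numpy as np

    def bond_matrix(weights):
        """Map a feedforward NN to a spin model: stack all neurons into one
        list of Ising spins and place each weight matrix W[l] (shape
        (n_{l+1}, n_l)) as the coupling block between adjacent layers."""
        sizes = [weights[0].shape[1]] + [W.shape[0] for W in weights]
        offsets = np.concatenate(([0], np.cumsum(sizes)))
        J = np.zeros((offsets[-1], offsets[-1]))
        for l, W in enumerate(weights):
            rows = slice(offsets[l + 1], offsets[l + 2])  # layer l+1 neurons
            cols = slice(offsets[l], offsets[l + 1])      # layer l neurons
            J[rows, cols] = W
            J[cols, rows] = W.T                           # make J symmetric
        return J

    # Untrained NN with independent Gaussian weights: the initial state
    # the abstract maps to a layered Sherrington-Kirkpatrick spin glass.
    rng = np.random.default_rng(0)
    layer_sizes = [784, 100, 10]  # hypothetical MNIST-like architecture
    Ws = [rng.normal(0.0, layer_sizes[l] ** -0.5,
                     size=(layer_sizes[l + 1], layer_sizes[l]))
          for l in range(len(layer_sizes) - 1)]
    J = bond_matrix(Ws)
    spins = rng.choice([-1, 1], size=J.shape[0])
    energy = -0.5 * spins @ J @ spins  # Ising energy H = -(1/2) Σ_ij J_ij s_i s_j

Couplings vanish between non-adjacent layers, which is what distinguishes this layered construction from the fully connected Sherrington-Kirkpatrick model.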
First, we prove analytically that the common initial state before training--an NN with independent random weights--maps to a layered version of the classical Sherrington-Kirkpatrick spin glass that exhibits replica symmetry breaking, and we calculate the spin-glass-to-paramagnet transition temperature.
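For reference, the classical Sherrington-Kirkpatrick (SK) model invoked here is defined, in one standard convention, by

$$ H = -\sum_{i<j} J_{ij}\, s_i s_j, \qquad s_i = \pm 1, \qquad J_{ij} \sim \mathcal{N}\!\big(0,\, J^2/N\big)\ \text{i.i.d.}, $$

with a spin-glass-to-paramagnet transition at $T_c = J$ (taking $k_B = 1$); the layered variant studied in the paper restricts which pairs $(i,j)$ carry nonzero couplings.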
Further, we use the Thouless-Anderson-Palmer (TAP) equations--a theoretical technique for analyzing the landscape of energy minima of random systems--to determine the evolution of the magnetic phases of two types of NNs (one with continuous and one with binarized activations) trained on the MNIST dataset. The two NN types give rise to similar results, showing a quick destruction of the spin glass and the appearance of a phase with hidden order, whose melting transition temperature $T_c$ grows as a power law in training time.
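For context, the TAP equations in their standard zero-field form for an Ising system with couplings $J_{ij}$ (the paper's layered setting may differ in normalization details) read

$$ m_i = \tanh\!\Big[\beta \sum_j J_{ij} m_j \;-\; \beta^2 m_i \sum_j J_{ij}^2 \big(1 - m_j^2\big)\Big], $$

where $m_i = \langle s_i \rangle$ is the local magnetization and the second term is the Onsager reaction correction to naive mean-field theory; solutions of these equations enumerate the minima of the free-energy landscape.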
We also discuss the properties of the spectrum of the spin system's bond matrix in the context of rich vs. lazy learning. We suggest that this statistical mechanical view of NNs provides a useful unifying perspective on the training process, which can be viewed as selecting and strengthening a symmetry-broken state associated with the training task.
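As a closing illustration of the spectral diagnostic (again our sketch, reusing the hypothetical bond_matrix helper above, not the paper's exact analysis), one can track the eigenvalues of $J$ across training checkpoints; an outlier eigenvalue detaching from the bulk is the kind of signature one would associate with an emerging hidden order, while a bulk that merely rescales is more suggestive of lazy-regime dynamics:

    # Spectrum of the bond matrix. J is real symmetric, so eigvalsh applies.
    eigvals = np.linalg.eigvalsh(J)
    print("top 5 eigenvalues:", np.round(eigvals[-5:], 3))
    # A crude bulk-edge estimate from the mean-square coupling; eigenvalues
    # well beyond it would indicate emerging low-rank ("hidden") order.
    sigma2 = np.mean(J ** 2)  # averaged over all entries, zeros included
    bulk_edge = 2.0 * np.sqrt(sigma2 * J.shape[0])
    print("outliers beyond crude bulk edge:",
          int(np.sum(np.abs(eigvals) > bulk_edge)))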