Nikhil Vyas, Depen Morwani, Rosie Zhao, Itai Shapira, David Brandfonbrener, Lucas Janson, Sham Kakade
arXiv:2409.11321 · arXiv - CS - Machine Learning · 2024-09-17
SOAP: Improving and Stabilizing Shampoo using Adam
There is growing evidence of the effectiveness of Shampoo, a higher-order
preconditioning method, over Adam in deep learning optimization tasks. However,
Shampoo's drawbacks include additional hyperparameters and computational
overhead when compared to Adam, which only updates running averages of first-
and second-moment quantities. This work establishes a formal connection between
Shampoo (implemented with the 1/2 power) and Adafactor -- a memory-efficient
approximation of Adam -- showing that Shampoo is equivalent to running
Adafactor in the eigenbasis of Shampoo's preconditioner. This insight leads to
the design of a simpler and computationally efficient algorithm:
$\textbf{S}$hampo$\textbf{O}$ with $\textbf{A}$dam in the
$\textbf{P}$reconditioner's eigenbasis (SOAP). With regard to improving Shampoo's computational
efficiency, the most straightforward approach would be to compute Shampoo's
eigendecomposition less frequently. Unfortunately, as our empirical results
show, this leads to performance degradation that worsens as the eigendecomposition is computed less often.
SOAP mitigates this degradation by continually updating the running average of
the second moment, just as Adam does, but in the current (slowly changing)
coordinate basis. Furthermore, since SOAP is equivalent to running Adam in a
rotated space, it introduces only one additional hyperparameter (the
preconditioning frequency) compared to Adam. We empirically evaluate SOAP on
language model pre-training with 360M- and 660M-parameter models. In the large batch
regime, SOAP reduces the number of iterations by over 40% and wall clock time
by over 35% compared to AdamW, with approximately 20% improvements in both
metrics compared to Shampoo. An implementation of SOAP is available at
https://github.com/nikhilvyas/SOAP.
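
To make the idea concrete, here is a minimal NumPy sketch of a SOAP-style update for a single weight matrix, based on the abstract's description: maintain Shampoo's left/right gradient statistics, refresh their eigenbases only every `precond_freq` steps, and run Adam's moment updates in the (slowly changing) rotated basis. The hyperparameter values, the `state` dictionary layout, and the omission of bias correction and moment re-rotation on basis changes are simplifying assumptions for illustration; consult the linked repository for the authors' actual implementation.

```python
import numpy as np

def soap_step(W, grad, state, lr=3e-3, b1=0.95, b2=0.95,
              shampoo_beta=0.95, precond_freq=10, eps=1e-8):
    """One illustrative SOAP-style step for a matrix parameter W.

    `state` holds: L, R (Shampoo statistics), QL, QR (eigenbases),
    m, v (Adam moments in the rotated space), t (step counter).
    Bias correction is omitted for brevity.
    """
    # Update Shampoo's running gradient statistics (L = EMA of G G^T, R = EMA of G^T G)
    state["L"] = shampoo_beta * state["L"] + (1 - shampoo_beta) * grad @ grad.T
    state["R"] = shampoo_beta * state["R"] + (1 - shampoo_beta) * grad.T @ grad

    # Refresh the eigenbasis only every `precond_freq` steps
    # (the "preconditioning frequency" hyperparameter)
    if state["t"] % precond_freq == 0:
        _, state["QL"] = np.linalg.eigh(state["L"])
        _, state["QR"] = np.linalg.eigh(state["R"])
    state["t"] += 1

    # Rotate the gradient into the preconditioner's eigenbasis
    g_rot = state["QL"].T @ grad @ state["QR"]

    # Run Adam's first/second moment updates in the rotated space;
    # the second moment keeps adapting even between basis refreshes
    state["m"] = b1 * state["m"] + (1 - b1) * g_rot
    state["v"] = b2 * state["v"] + (1 - b2) * g_rot ** 2
    update_rot = state["m"] / (np.sqrt(state["v"]) + eps)

    # Rotate the update back to the original coordinates and apply it
    return W - lr * state["QL"] @ update_rot @ state["QR"].T
```

When `precond_freq` is large, the per-step cost is dominated by a few matrix multiplies (like Adam plus rotations), with the eigendecompositions amortized across many steps.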