
Latest Publications in Doklady Mathematics

Activations and Gradients Compression for Model-Parallel Training
IF 0.5 | CAS Zone 4 (Mathematics) | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701314
M. I. Rudakov, A. N. Beznosikov, Ya. A. Kholodov, A. V. Gasnikov

Large neural networks require enormous computational clusters of machines. Model-parallel training, in which the model architecture is partitioned sequentially between workers, is a popular approach for training modern models. Information compression can be applied to decrease workers' communication time, as it is often a bottleneck in such systems. This work explores how simultaneous compression of activations and gradients in a model-parallel distributed training setup affects convergence. We analyze compression methods such as quantization and TopK compression, and also experiment with error compensation techniques. Moreover, we employ TopK with the AQ-SGD per-batch error feedback approach. We conduct experiments on image classification and language model fine-tuning tasks. Our findings demonstrate that gradients require milder compression rates than activations. We observe that K = 10% is the lowest TopK compression level that does not severely harm model convergence. Experiments also show that models trained with TopK perform well only when compression is also applied during inference. We find that error feedback techniques do not improve model-parallel training compared to plain compression, but they allow model inference without compression with almost no quality drop. Finally, when applied with the AQ-SGD approach, TopK compression stronger than K = 30% worsens model performance significantly.
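The two mechanisms the abstract combines, TopK sparsification and error feedback (carrying the compression residual into the next step), can be sketched in a few lines. This is a generic illustration of the technique, not the authors' implementation; the function names and the example tensor are ours.

```python
import numpy as np

def topk_compress(x, k_frac):
    """Keep the k_frac largest-magnitude entries of x, zero the rest."""
    k = max(1, int(k_frac * x.size))
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

def compress_with_error_feedback(x, error, k_frac):
    """Error-feedback step: add the residual carried over from the
    previous iteration before compressing, then store the new residual
    so no information is permanently discarded."""
    corrected = x + error
    compressed = topk_compress(corrected, k_frac)
    new_error = corrected - compressed
    return compressed, new_error
```

In a model-parallel setup, `compressed` is what crosses the worker boundary (activations forward, gradients backward), while `new_error` stays local.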

Doklady Mathematics, Vol. 108, Suppl. 2, pp. S272–S281.
Citations: 0
Neural Network Approach to the Problem of Predicting Interest Rate Anomalies under the Influence of Correlated Noise
IF 0.5 | CAS Zone 4 (Mathematics) | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701521
G. A. Zotov, P. P. Lukianchenko

The aim of this study is to analyze bifurcation points in financial models using colored noise as a stochastic component. The research investigates the impact of colored noise on change-points and an approach to detecting them via neural networks. The paper presents a literature review on the use of colored noise in complex systems. The Vasicek stochastic model of interest rates is the object of the research. The research methodology involves approximating numerical solutions of the model using the Euler–Maruyama method, calibrating model parameters, and adjusting the integration step. Methods for detecting bifurcation points and their application to the data are discussed. The study results include the outcomes of an LSTM model trained to detect change-points for models with different types of noise. Results are provided for comparison across various change-point windows and forecast step sizes.
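The Euler–Maruyama discretization of the Vasicek model dr = a(b − r)dt + σ dW mentioned above can be sketched as follows. For brevity the noise here is plain white noise; the paper's point is to replace it with colored noise. All parameter values are illustrative.

```python
import numpy as np

def vasicek_euler_maruyama(r0, a, b, sigma, dt, n_steps, rng):
    """Simulate the Vasicek short-rate model dr = a(b - r)dt + sigma dW
    with the Euler-Maruyama scheme: one Gaussian increment per step."""
    r = np.empty(n_steps + 1)
    r[0] = r0
    for t in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        r[t + 1] = r[t] + a * (b - r[t]) * dt + sigma * dW
    return r
```

With σ = 0 the scheme reduces to exponential mean reversion toward b, which gives a quick sanity check of the discretization.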

Doklady Mathematics, Vol. 108, Suppl. 2, pp. S293–S299.
Citations: 0
Do we Benefit from the Categorization of the News Flow in the Stock Price Prediction Problem?
IF 0.5 | CAS Zone 4 (Mathematics) | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701648
T. D. Kulikova, E. Yu. Kovtun, S. A. Budennyy

The power of machine learning is widely leveraged in the task of company stock price prediction. It is essential to incorporate historical stock prices and relevant external world information to construct a more accurate predictive model. The sentiments of financial news connected with the company can become such valuable knowledge. However, financial news covers different topics, such as Macro, Markets, or Product news. The adoption of such categorization is usually out of scope in market research. In this work, we aim to close this gap and explore the effect of capturing news topic differentiation in the stock price prediction problem. Initially, we classify the financial news stream into 20 pre-defined topics with a pre-trained model. Then, we extract sentiments and explore sentiment labeling of the news topic groups. Moreover, we conduct experiments with several well-proven models for time series forecasting, including the Temporal Convolutional Network (TCN), the D-Linear, the Transformer, and the Temporal Fusion Transformer (TFT). Our results show that utilizing information from separate topic groups leads to better performance of deep learning models compared to considering all news sentiments without any division.
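The core preprocessing idea, keeping sentiment signals separate per topic instead of pooling all news into one score, can be sketched like this. The function and the three-topic example are illustrative assumptions, not the paper's pipeline.

```python
from collections import defaultdict

def topic_sentiment_features(news, topics):
    """Aggregate per-day sentiment scores separately for each news topic,
    instead of pooling all news into a single sentiment signal.
    news: iterable of (day, topic, sentiment_score) triples."""
    feats = defaultdict(lambda: {t: 0.0 for t in topics})
    counts = defaultdict(lambda: {t: 0 for t in topics})
    for day, topic, score in news:
        feats[day][topic] += score
        counts[day][topic] += 1
    # average within each (day, topic) cell; empty cells stay neutral (0.0)
    for day in feats:
        for t in topics:
            if counts[day][t]:
                feats[day][t] /= counts[day][t]
    return dict(feats)
```

The resulting per-topic columns can then be fed to any of the forecasting models listed above alongside the price history.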

Doklady Mathematics, Vol. 108, Suppl. 2, pp. S503–S510.
Citations: 0
Machine Learning As a Tool to Accelerate the Search for New Materials for Metal-Ion Batteries
IF 0.5 | CAS Zone 4 (Mathematics) | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701612
V. T. Osipov, M. I. Gongola, Ye. A. Morkhova, A. P. Nemudryi, A. A. Kabanov

The search for new solid ionic conductors is an important topic of materials science that requires significant resources, but it can be accelerated using machine learning (ML) techniques. In this work, ML methods were applied to predict the migration energy of working ions. The training set is based on data on 225 lithium-ion migration channels in 23 ionic conductors. The descriptors were the parameters of free space in the crystal obtained by the Voronoi partitioning method. The accuracy of migration energy prediction was evaluated by comparison with data obtained by the density functional theory method. Two ML methods were applied in this work: support vector regression and ordinal regression. It is shown that the parameters of free space in a crystal correlate with the migration energy, with the best results obtained by ordinal regression. The developed ML models can be used as an additional filter in the analysis of ionic conductivity in solids.
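The regression setup, geometric free-space descriptors in, migration energy out, can be sketched with a linear ridge model as a lightweight stand-in for the SVR and ordinal regression the paper actually uses. Function names, descriptor values, and the ridge choice are our assumptions.

```python
import numpy as np

def fit_migration_energy_model(descriptors, energies, ridge=1e-6):
    """Fit a linear ridge model mapping free-space descriptors of a
    migration channel to its migration energy (a simple stand-in for
    the SVR / ordinal regression used in the paper)."""
    X = np.asarray(descriptors, float)
    y = np.asarray(energies, float)
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    w = np.linalg.solve(X1.T @ X1 + ridge * np.eye(X1.shape[1]), X1.T @ y)
    return w

def predict_migration_energy(w, descriptors):
    X = np.asarray(descriptors, float)
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])
    return X1 @ w
```

Channels whose predicted energy exceeds a chosen cutoff could then be filtered out before running expensive DFT calculations, which is the screening role the abstract describes.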

Doklady Mathematics, Vol. 108, Suppl. 2, pp. S476–S483.
Citations: 0
Statistical Online Learning in Recurrent and Feedforward Quantum Neural Networks
IF 0.5 | CAS Zone 4 (Mathematics) | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701557
S. V. Zuev

For adaptive artificial intelligence systems, the question of the possibility of online learning is especially important, since such training provides adaptation. The purpose of this work is to consider methods of online quantum machine learning for the two most common architectures of quantum neural networks: feedforward and recurrent. The work uses the quantumz module available on PyPI to emulate quantum computing and create artificial quantum neural networks. In addition, the genser module is used to transform data dimensions, which provides reversible transformation of dimensions without loss of information. The data for the experiments are taken from open sources. The paper implements the machine learning method without optimization proposed earlier by the author. Online learning algorithms for recurrent and feedforward quantum neural networks are presented and experimentally confirmed. The proposed learning algorithms can be used as data science tools, as well as part of adaptive intelligent control systems. The developed software can fully unleash its potential only on quantum computers, but, in the case of a small number of quantum registers, it can also be used in systems that emulate quantum computing, or in photonic computers.
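The online-learning setting the abstract refers to, updating the model one sample at a time as data streams in, can be illustrated classically with one-pass SGD on a linear model. This sketch deliberately avoids the quantumz API (whose interface we do not assume) and only shows the streaming-update pattern.

```python
import numpy as np

def online_sgd(stream, dim, lr=0.1):
    """Plain online (one-pass, per-sample) SGD for a linear model --
    a classical illustration of the online-learning setting; each
    (x, y) pair is seen exactly once, so the model adapts as it goes."""
    w = np.zeros(dim)
    for x, y in stream:
        pred = w @ x
        w -= lr * (pred - y) * x  # gradient of the squared error on one sample
    return w
```

The same one-sample-at-a-time loop structure is what distinguishes online learning from batch training, regardless of whether the model underneath is classical or quantum.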

Doklady Mathematics, Vol. 108, Suppl. 2, pp. S317–S324.
Citations: 0
MTS Kion Implicit Contextualised Sequential Dataset for Movie Recommendation
IF 0.5 | CAS Zone 4 (Mathematics) | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701594
I. Safilo, D. Tikhonovich, A. V. Petrov, D. I. Ignatov

We present a new movie and TV show recommendation dataset collected from real users of the MTS Kion video-on-demand platform. In contrast to other popular movie recommendation datasets, such as MovieLens or Netflix, our dataset is based on implicit interactions registered at watching time, rather than on explicit ratings. We also provide rich contextual and side information, including interaction characteristics (such as temporal information, watch duration, and watch percentage), user demographics, and rich movie meta-information. In addition, we describe the MTS Kion Challenge, an online recommender systems challenge that was based on this dataset, and provide an overview of the best-performing solutions of the winners. We keep the competition sandbox open, so researchers are welcome to try their own recommendation algorithms and measure their quality on the private part of the dataset.
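Turning watch events into implicit positives is the key difference from rating-based datasets. A minimal sketch, where the 50% watch-percentage threshold is an illustrative choice of ours, not the dataset's definition:

```python
def implicit_interactions(events, min_watch_pct=50.0):
    """Turn raw watch events into implicit positive (user, item) pairs:
    a pair counts as positive if the user watched at least min_watch_pct
    percent of the title. events: iterable of (user, item, watch_pct)."""
    return [(u, i) for u, i, pct in events if pct >= min_watch_pct]
```

Unlike explicit ratings, everything below the threshold is simply unobserved rather than negative, which is what makes implicit-feedback modeling its own problem.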

Doklady Mathematics, Vol. 108, Suppl. 2, pp. S456–S464.
Citations: 0
Optimal Data Splitting in Distributed Optimization for Machine Learning
IF 0.5 | CAS Zone 4 (Mathematics) | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701600
D. Medyakov, G. Molodtsov, A. Beznosikov, A. Gasnikov

The distributed optimization problem has become increasingly relevant recently. It has many advantages, such as processing a large amount of data in less time compared to non-distributed methods. However, most distributed approaches suffer from a significant bottleneck: the cost of communication. Therefore, a large amount of research has recently been directed at solving this problem. One such approach uses local data similarity. In particular, there exists an algorithm provably exploiting the similarity property optimally. But this result, as well as results from other works, addresses the communication bottleneck by focusing only on the fact that communication is significantly more expensive than local computation, and does not take into account the varying capacities of network devices or the different relationships between communication time and local computation cost. We consider this setup, and the objective of this study is to achieve an optimal ratio of distributed data between the server and local machines for arbitrary costs of communication and local computation. The running times of the network are compared between uniform and optimal distributions. The superior theoretical performance of our solutions is experimentally validated.
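The flavor of the optimal-ratio question can be seen in a toy two-machine model: keep a fraction p of the n samples on the server and ship the rest to a worker that pays a fixed communication cost on top of its compute time; the split that equalizes the two finish times minimizes the makespan. This is our illustrative model, not the paper's general analysis.

```python
def optimal_split(n, t_server, t_worker, comm):
    """Toy two-machine model: fraction p of n samples kept on the server
    so that both machines finish simultaneously.
    Server time: p*n*t_server; worker time: (1-p)*n*t_worker + comm.
    Equalizing gives p = (n*t_worker + comm) / (n*(t_server + t_worker))."""
    p = (n * t_worker + comm) / (n * (t_server + t_worker))
    return min(1.0, max(0.0, p))  # clamp: with huge comm, keep everything local
```

With zero communication cost and equal speeds the split is the uniform 50/50; as communication gets more expensive, the optimum shifts data toward the server, which is the qualitative effect the abstract describes.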

Doklady Mathematics, Vol. 108, Suppl. 2, pp. S465–S475.
Citations: 0
1-Dimensional Topological Invariants to Estimate Loss Surface Non-Convexity
IF 0.5 | CAS Zone 4 (Mathematics) | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701569
D. S. Voronkova, S. A. Barannikov, E. V. Burnaev

We utilize the framework of topological data analysis to examine the geometry of the loss landscape. Using topology and Morse theory, we propose to analyze 1-dimensional topological invariants as a measure of loss function non-convexity up to arbitrary re-parametrization. The proposed approach uses optimization of 2-dimensional simplices in the network weight space and makes it possible to conduct both qualitative and quantitative evaluation of the loss landscape to gain insights into the behavior and optimization of neural networks. We provide a geometrical interpretation of the topological invariants and describe the algorithm for their computation. We expect that the proposed approach can complement existing tools for analysis of the loss landscape and shed light on unresolved issues in the field of deep learning.
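To make the topological idea concrete in the simplest setting, here is 0-dimensional sublevel-set persistence of a 1-D loss curve: each local minimum is "born" at its value and "dies" where its basin merges into an older one (the elder rule). Finite pairs with positive persistence witness non-convexity. This is a crude 1-D proxy of ours, not the paper's 1-dimensional-invariant algorithm.

```python
def sublevel_persistence(values):
    """0-dim sublevel-set persistence of a 1-D sequence via union-find:
    returns (birth, death) pairs; the global minimum's pair is essential
    (death = infinity). Extra finite pairs with death > birth indicate
    additional local minima, i.e. non-convexity of the curve."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    order = sorted(range(n), key=lambda i: values[i])
    active = [False] * n
    birth = {}   # component root -> birth value
    pairs = []
    for i in order:
        active[i] = True
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri != rj:
                    if birth[ri] > birth[rj]:
                        ri, rj = rj, ri          # rj is the younger component
                    pairs.append((birth[rj], values[i]))  # younger one dies here
                    parent[rj] = ri
    pairs.append((birth[find(order[0])], float("inf")))  # global minimum never dies
    return pairs
```

A convex (single-basin) curve yields only the essential pair; every additional finite pair of positive persistence quantifies how far the curve is from convex, which is the spirit of the invariants the paper computes in weight space.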

Doklady Mathematics, Vol. 108, Suppl. 2, pp. S325–S332.
Citations: 0
Safe Pretraining of Deep Language Models in a Synthetic Pseudo-Language
IF 0.5 | CAS Zone 4 (Mathematics) | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701636
T. E. Gorbacheva, I. Y. Bondarenko

This paper compares the pretraining of a transformer on natural language texts and on sentences of a synthetic pseudo-language. The artificial texts are automatically generated according to rules written in a context-free grammar. The results of fine-tuning on tasks of the RussianSuperGLUE project showed, with statistical reliability, that the models had the same scores. That is, the use of artificial texts facilitates AI safety, because the composition of the dataset can be completely controlled. In addition, at the pretraining stage of a RoBERTa-like model, it is enough to learn to recognize only the syntactic and morphological patterns of the language, which can be successfully created in a fairly simple way, such as with a context-free grammar.
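Generating pseudo-language sentences from a context-free grammar is straightforward to sketch. The grammar below (its non-terminals and made-up terminal words) is entirely illustrative, not the paper's grammar.

```python
import random

# A tiny context-free grammar; production rules and terminal "words"
# are invented for illustration only.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["ta"], ["mo"]],
    "N":   [["kura"], ["vint"]],
    "V":   [["sule"], ["darn"]],
}

def generate(symbol="S", rng=random):
    """Recursively expand a non-terminal into a random list of terminals."""
    if symbol not in GRAMMAR:
        return [symbol]          # terminal: emit as-is
    production = rng.choice(GRAMMAR[symbol])
    out = []
    for sym in production:
        out.extend(generate(sym, rng))
    return out
```

Because every sentence is derived from known rules, the pretraining corpus contains no personal data or toxic content by construction, which is the safety property the abstract emphasizes.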

Doklady Mathematics, Vol. 108, Suppl. 2, pp. S494–S502.
Citations: 0
Optimal Analysis of Method with Batching for Monotone Stochastic Finite-Sum Variational Inequalities
IF 0.5 | Mathematics, Q3 (CAS Tier 4) | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701582
A. Pichugin, M. Pechin, A. Beznosikov, A. Savchenko, A. Gasnikov

Variational inequalities are a universal optimization paradigm that is interesting in its own right and also encompasses classical minimization and saddle point problems. Modern applications encourage the consideration of stochastic formulations of optimization problems. In this paper, we present an analysis of a method that gives optimal convergence estimates for monotone stochastic finite-sum variational inequalities. In contrast to previous works, our method supports batching without losing oracle complexity optimality. The effectiveness of the algorithm, especially for small but non-single batch sizes, is confirmed experimentally.
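The setting can be made concrete with a small sketch. This is not the authors' algorithm: it is a generic mini-batched stochastic extragradient iteration applied to the monotone finite-sum variational inequality arising from the bilinear saddle point min_x max_y (1/n) Σ_i xᵀA_i y, with all constants and data chosen for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's method): batched stochastic
# extragradient for a monotone finite-sum VI. Each component operator is
# F_i(x, y) = (A_i y, -A_i^T x); the unique solution is z = 0.
rng = np.random.default_rng(0)
n, d = 20, 5                                          # components, dimension
M0 = np.eye(d) + 0.3 * rng.standard_normal((d, d))    # shared well-posed part
A = M0[None, :, :] + 0.1 * rng.standard_normal((n, d, d))

def F(z, idx):
    """Operator averaged over the sampled batch of component indices."""
    x, y = z[:d], z[d:]
    Ab = A[idx].mean(axis=0)
    return np.concatenate([Ab @ y, -Ab.T @ x])

z = rng.standard_normal(2 * d)
gamma, batch = 0.2, 4                                 # step size, batch size
start = np.linalg.norm(z)
for _ in range(2000):
    idx = rng.choice(n, size=batch, replace=False)
    z_half = z - gamma * F(z, idx)                    # extrapolation step
    idx = rng.choice(n, size=batch, replace=False)
    z = z - gamma * F(z_half, idx)                    # update with a fresh batch
print(start, "->", np.linalg.norm(z))                 # distance to the solution shrinks
```

Each iteration queries the oracle only on a small batch rather than on all n components, which is the regime ("small but not single batches") the abstract highlights.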

A. Pichugin, M. Pechin, A. Beznosikov, A. Savchenko, A. Gasnikov, "Optimal Analysis of Method with Batching for Monotone Stochastic Finite-Sum Variational Inequalities," Doklady Mathematics, vol. 108 (2 supplement), pp. S348–S359, 2024. DOI: 10.1134/S1064562423701582.
Citations: 0