
Latest articles in Doklady Mathematics

Nonlinear Variational Inequalities with Bilateral Constraints Coinciding on a Set of Positive Measure
IF 0.5 | CAS Tier 4 | Q3 MATHEMATICS | Pub Date: 2024-04-18 | DOI: 10.1134/S1064562424701813
A. A. Kovalevsky

We consider variational inequalities with invertible operators \(\mathcal{A}_s\colon W_0^{1,p}(\Omega) \to W^{-1,p'}(\Omega)\), \(s \in \mathbb{N}\), in divergence form and with constraint set \(V = \{v \in W_0^{1,p}(\Omega)\colon \varphi \leqslant v \leqslant \psi \text{ a.e. in } \Omega\}\), where \(\Omega\) is a nonempty bounded open set in \(\mathbb{R}^n\) \((n \geqslant 2)\), \(p > 1\), and \(\varphi, \psi\colon \Omega \to \overline{\mathbb{R}}\) are measurable functions. Under the assumptions that the operators \(\mathcal{A}_s\) G-converge to an invertible operator \(\mathcal{A}\colon W_0^{1,p}(\Omega) \to W^{-1,p'}(\Omega)\), \(\operatorname{int}\{\varphi = \psi\} \ne \varnothing\), \(\operatorname{meas}(\partial\{\varphi = \psi\} \cap \Omega) = 0\), and there exist functions \(\bar{\varphi}, \bar{\psi} \in W_0^{1,p}(\Omega)\) such that \(\varphi \leqslant \bar{\varphi} \leqslant \bar{\psi} \leqslant \psi\) a.e. in \(\Omega\) and \(\operatorname{meas}(\{\varphi \ne \psi\} \setminus \{\bar{\varphi} \ne \bar{\psi}\}) = 0\), we establish that the solutions \(u_s\) of the variational inequalities converge weakly in \(W_0^{1,p}(\Omega)\) to the solution \(u\) of a similar variational inequality with the operator \(\mathcal{A}\) and the constraint set \(V\). The fundamental difference of the considered case from the previously studied one, in which \(\operatorname{meas}\{\varphi = \psi\} = 0\), is that, in general, the functionals \(\mathcal{A}_s u_s\) do not converge to \(\mathcal{A}u\) even weakly in \(W^{-1,p'}(\Omega)\) and the energy integrals \(\langle \mathcal{A}_s u_s, u_s\rangle\) do not converge to \(\langle \mathcal{A}u, u\rangle\).

{"title":"Nonlinear Variational Inequalities with Bilateral Constraints Coinciding on a Set of Positive Measure","authors":"A. A. Kovalevsky","doi":"10.1134/S1064562424701813","DOIUrl":"10.1134/S1064562424701813","url":null,"abstract":"<p>We consider variational inequalities with invertible operators <span>({{mathcal{A}}_{s}}{text{:}}~,W_{0}^{{1,p}}left( {{Omega }} right) to {{W}^{{ - 1,p'}}}left( {{Omega }} right),)</span> <span>(s in mathbb{N},)</span> in divergence form and with constraint set <span>(V = { {v} in W_{0}^{{1,p}}left( {{Omega }} right){text{: }}varphi leqslant {v} leqslant psi ~)</span> a.e. in <span>({{Omega }}} ,)</span> where <span>({{Omega }})</span> is a nonempty bounded open set in <span>({{mathbb{R}}^{n}})</span> <span>(left( {n geqslant 2} right))</span>, <i>p</i> &gt; 1, and <span>(varphi ,psi {{:;Omega }} to bar {mathbb{R}})</span> are measurable functions. Under the assumptions that the operators <span>({{mathcal{A}}_{s}})</span> <i>G-</i>converge to an invertible operator <span>(mathcal{A}{text{: }}W_{0}^{{1,p}}left( {{Omega }} right) to {{W}^{{ - 1,p'}}}left( {{Omega }} right))</span>, <span>({text{int}}left{ {varphi = psi } right} ne varnothing ,)</span> <span>({text{meas}}left( {partial left{ {varphi = psi } right} cap {{Omega }}} right))</span> = 0, and there exist functions <span>(bar {varphi },bar {psi } in W_{0}^{{1,p}}left( {{Omega }} right))</span> such that <span>(varphi leqslant overline {varphi ~} leqslant bar {psi } leqslant psi )</span> a.e. in <span>({{Omega }})</span> and <span>({text{meas}}left( {left{ {varphi ne psi } right}{{backslash }}left{ {bar {varphi } ne bar {psi }} right}} right) = 0,)</span> we establish that the solutions <i>u</i><sub><i>s</i></sub> of the variational inequalities converge weakly in <span>(W_{0}^{{1,p}}left( {{Omega }} right))</span> to the solution <i>u</i> of a similar variational inequality with the operator <span>(mathcal{A})</span> and the constraint set <i>V</i>. 
The fundamental difference of the considered case from the previously studied one in which <span>({text{meas}}left{ {varphi = psi } right} = 0)</span> is that, in general, the functionals <span>({{mathcal{A}}_{s}}{{u}_{s}})</span> do not converge to <span>(mathcal{A}u)</span> even weakly in <span>({{W}^{{ - 1,p'}}}left( {{Omega }} right))</span> and the energy integrals <span>(langle {{mathcal{A}}_{s}}{{u}_{s}},{{u}_{s}}rangle )</span> do not converge to <span>(langle mathcal{A}u,urangle )</span>.</p>","PeriodicalId":531,"journal":{"name":"Doklady Mathematics","volume":"109 1","pages":"62 - 65"},"PeriodicalIF":0.5,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140625741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
On Undecidability of Subset Theories of Some Unars
IF 0.5 | CAS Tier 4 | Q3 MATHEMATICS | Pub Date: 2024-04-18 | DOI: 10.1134/S1064562424701874
B. N. Karlov

This paper is dedicated to studying the algorithmic properties of unars with an injective function. We prove that the theory of every such unar admits quantifier elimination if the language is extended by a countable set of predicate symbols. Necessary and sufficient conditions are established for the quantifier elimination to be effective, and a criterion for decidability of theories of such unars is formulated. Using this criterion, we build a unar such that its theory is decidable, but the theory of the unar of its subsets is undecidable.

{"title":"On Undecidability of Subset Theories of Some Unars","authors":"B. N. Karlov","doi":"10.1134/S1064562424701874","DOIUrl":"10.1134/S1064562424701874","url":null,"abstract":"<p>This paper is dedicated to studying the algorithmic properties of unars with an injective function. We prove that the theory of every such unar admits quantifier elimination if the language is extended by a countable set of predicate symbols. Necessary and sufficient conditions are established for the quantifier elimination to be effective, and a criterion for decidability of theories of such unars is formulated. Using this criterion, we build a unar such that its theory is decidable, but the theory of the unar of its subsets is undecidable.</p>","PeriodicalId":531,"journal":{"name":"Doklady Mathematics","volume":"109 2","pages":"112 - 116"},"PeriodicalIF":0.5,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140625625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Note on Borsuk’s Problem in Minkowski Spaces
IF 0.5 | CAS Tier 4 | Q3 MATHEMATICS | Pub Date: 2024-04-18 | DOI: 10.1134/S1064562424701849
A. M. Raigorodskii, A. Sagdeev

In 1993, Kahn and Kalai famously constructed a sequence of finite sets in d-dimensional Euclidean spaces that cannot be partitioned into less than \((1.203\ldots + o(1))^{\sqrt{d}}\) parts of smaller diameter. Their method works not only for the Euclidean, but for all \(\ell_p\)-spaces as well. In this short note, we observe that the larger the value of p, the stronger this construction becomes.

{"title":"A Note on Borsuk’s Problem in Minkowski Spaces","authors":"A. M. Raigorodskii,&nbsp;A. Sagdeev","doi":"10.1134/S1064562424701849","DOIUrl":"10.1134/S1064562424701849","url":null,"abstract":"<p>In 1993, Kahn and Kalai famously constructed a sequence of finite sets in <i>d</i>-dimensional Euclidean spaces that cannot be partitioned into less than <span>({{(1.203 ldots + o(1))}^{{sqrt d }}})</span> parts of smaller diameter. Their method works not only for the Euclidean, but for all <span>({{ell }_{p}})</span>-spaces as well. In this short note, we observe that the larger the value of <i>p</i>, the stronger this construction becomes.</p>","PeriodicalId":531,"journal":{"name":"Doklady Mathematics","volume":"109 1","pages":"80 - 83"},"PeriodicalIF":0.5,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140625747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Activations and Gradients Compression for Model-Parallel Training
IF 0.5 | CAS Tier 4 | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701314
M. I. Rudakov, A. N. Beznosikov, Ya. A. Kholodov, A. V. Gasnikov

Large neural networks require enormous computational clusters of machines. Model-parallel training, when the model architecture is partitioned sequentially between workers, is a popular approach for training modern models. Information compression can be applied to decrease workers’ communication time, as it is often a bottleneck in such systems. This work explores how simultaneous compression of activations and gradients in a model-parallel distributed training setup affects convergence. We analyze compression methods such as quantization and TopK compression, and also experiment with error compensation techniques. Moreover, we employ TopK with the AQ-SGD per-batch error feedback approach. We conduct experiments on image classification and language model fine-tuning tasks. Our findings demonstrate that gradients require milder compression rates than activations. We observe that \(K = 10\%\) is the lowest TopK compression level that does not severely harm model convergence. Experiments also show that models trained with TopK perform well only when compression is also applied during inference. We find that error feedback techniques do not improve model-parallel training compared to plain compression, but allow model inference without compression with almost no quality drop. Finally, when applied with the AQ-SGD approach, TopK compression stronger than \(K = 30\%\) worsens model performance significantly.
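The TopK operator referenced in this abstract can be sketched as follows. The helper name and shapes below are illustrative, not taken from the paper: the idea is to keep the K% largest-magnitude entries of a tensor and zero the rest, so that only the surviving (index, value) pairs need to be communicated between workers.

```python
import numpy as np

def topk_compress(x, k_percent):
    """Keep the k_percent largest-magnitude entries of x; zero the rest.

    Returns a dense array for clarity; a real model-parallel setup would
    transmit only the surviving indices and values.
    """
    flat = x.ravel()
    k = max(1, int(len(flat) * k_percent / 100))
    # Indices of the k entries with the largest absolute values.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(x.shape)

grad = np.array([0.1, -2.0, 0.05, 3.0, -0.4])
sparse_grad = topk_compress(grad, 40)  # keeps the two largest-magnitude entries
```

The same operator can be applied to activations on the forward pass and to gradients on the backward pass, with different K for each, which is the comparison the abstract describes.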

{"title":"Activations and Gradients Compression for Model-Parallel Training","authors":"M. I. Rudakov,&nbsp;A. N. Beznosikov,&nbsp;Ya. A. Kholodov,&nbsp;A. V. Gasnikov","doi":"10.1134/S1064562423701314","DOIUrl":"10.1134/S1064562423701314","url":null,"abstract":"<p>Large neural networks require enormous computational clusters of machines. Model-parallel training, when the model architecture is partitioned sequentially between workers, is a popular approach for training modern models. Information compression can be applied to decrease workers’ communication time, as it is often a bottleneck in such systems. This work explores how simultaneous compression of activations and gradients in model-parallel distributed training setup affects convergence. We analyze compression methods such as quantization and TopK compression, and also experiment with error compensation techniques. Moreover, we employ TopK with AQ-SGD per-batch error feedback approach. We conduct experiments on image classification and language model fine-tuning tasks. Our findings demonstrate that gradients require milder compression rates than activations. We observe that <span>(K = 10% )</span> is the lowest TopK compression level, which does not harm model convergence severely. Experiments also show that models trained with TopK perform well only when compression is also applied during inference. We find that error feedback techniques do not improve model-parallel training compared to plain compression, but allow model inference without compression with almost no quality drop. 
Finally, when applied with the AQ-SGD approach, TopK stronger than with <span>(K = 30% )</span> worsens model performance significantly.</p>","PeriodicalId":531,"journal":{"name":"Doklady Mathematics","volume":"108 2 supplement","pages":"S272 - S281"},"PeriodicalIF":0.5,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142413765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Neural Network Approach to the Problem of Predicting Interest Rate Anomalies under the Influence of Correlated Noise
IF 0.5 | CAS Tier 4 | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701521
G. A. Zotov, P. P. Lukianchenko

The aim of this study is to analyze bifurcation points in financial models using colored noise as a stochastic component. The research investigates the impact of colored noise on change-points and approaches to their detection via neural networks. The paper presents a literature review on the use of colored noise in complex systems. The Vasicek stochastic model of interest rates is the object of the research. The research methodology involves approximating numerical solutions of the model using the Euler–Maruyama method, calibrating model parameters, and adjusting the integration step. Methods for detecting bifurcation points and their application to the data are discussed. The study results include the outcomes of an LSTM model trained to detect change-points for models with different types of noise. Results are provided for comparison across various change-point windows and forecast step sizes.
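The Euler–Maruyama discretization of the Vasicek model \(dr_t = a(b - r_t)\,dt + \sigma\,dW_t\) mentioned above can be sketched as follows. The parameter values and the plain (white-noise) driving term are illustrative only; the study calibrates its own parameters and works with colored noise.

```python
import numpy as np

def vasicek_euler_maruyama(r0, a, b, sigma, dt, n_steps, rng=None):
    """Simulate the Vasicek short-rate model dr = a*(b - r)*dt + sigma*dW
    with the Euler–Maruyama scheme; returns the simulated path of length
    n_steps + 1 (including the initial rate r0)."""
    rng = np.random.default_rng(rng)
    r = np.empty(n_steps + 1)
    r[0] = r0
    for t in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over dt
        r[t + 1] = r[t] + a * (b - r[t]) * dt + sigma * dW
    return r

# One year of daily steps with illustrative parameters.
path = vasicek_euler_maruyama(r0=0.03, a=0.5, b=0.04, sigma=0.01,
                              dt=1 / 252, n_steps=252, rng=0)
```

Replacing the `dW` draw with an Ornstein–Uhlenbeck (colored) increment would give the correlated-noise variant the paper studies.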

{"title":"Neural Network Approach to the Problem of Predicting Interest Rate Anomalies under the Influence of Correlated Noise","authors":"G. A. Zotov,&nbsp;P. P. Lukianchenko","doi":"10.1134/S1064562423701521","DOIUrl":"10.1134/S1064562423701521","url":null,"abstract":"<p>The aim of this study is to analyze bifurcation points in financial models using colored noise as a stochastic component. The research investigates the impact of colored noise on change-points and approach to their detection via neural networks. The paper presents a literature review on the use of colored noise in complex systems. The Vasicek stochastic model of interest rates is the object of the research. The research methodology involves approximating numerical solutions of the model using the Euler–Maruyama method, calibrating model parameters, and adjusting the integration step. Methods for detecting bifurcation points and their application to the data are discussed. The study results include the outcomes of an LSTM model trained to detect change-points for models with different types of noise. Results are provided for comparison with various change-point windows and forecast step sizes.</p>","PeriodicalId":531,"journal":{"name":"Doklady Mathematics","volume":"108 2 supplement","pages":"S293 - S299"},"PeriodicalIF":0.5,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142413766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Do we Benefit from the Categorization of the News Flow in the Stock Price Prediction Problem?
IF 0.5 | CAS Tier 4 | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701648
T. D. Kulikova, E. Yu. Kovtun, S. A. Budennyy

The power of machine learning is widely leveraged in the task of company stock price prediction. It is essential to incorporate historical stock prices and relevant external world information for constructing a more accurate predictive model. The sentiments of the financial news connected with the company can become such valuable knowledge. However, financial news has different topics, such as Macro, Markets, or Product news. The adoption of such categorization is usually out of scope in market research. In this work, we aim to close this gap and explore the effect of capturing the news topic differentiation in the stock price prediction problem. Initially, we classify the financial news stream into 20 pre-defined topics with the pre-trained model. Then, we get sentiments and explore the topic of news group sentiment labeling. Moreover, we conduct experiments with several well-proven models for time series forecasting, including the Temporal Convolutional Network (TCN), the D-Linear, the Transformer, and the Temporal Fusion Transformer (TFT). In the results of our research, utilizing the information from separate topic groups contributes to a better performance of deep learning models compared to the approach when we consider all news sentiments without any division.
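The per-topic aggregation described above can be sketched roughly as follows. The function name and the (topic, sentiment) input shape are assumptions for illustration; the paper's pipeline obtains topics and sentiments from pre-trained models over 20 categories.

```python
from collections import defaultdict

def topic_sentiment_features(news, topics):
    """Aggregate mean sentiment per topic for one trading period.

    `news` is a list of (topic, sentiment) pairs with sentiment in [-1, 1];
    the result is one feature per pre-defined topic, 0.0 when a topic has
    no news, suitable as extra inputs to a forecasting model.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for topic, sentiment in news:
        sums[topic] += sentiment
        counts[topic] += 1
    return [sums[t] / counts[t] if counts[t] else 0.0 for t in topics]

day_news = [("Macro", 0.5), ("Macro", -0.5), ("Markets", 1.0)]
features = topic_sentiment_features(day_news, ["Macro", "Markets", "Product"])
```

Concatenating such a vector with the price history at each time step is one simple way to feed topic-differentiated sentiment into models like the TCN or TFT.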

{"title":"Do we Benefit from the Categorization of the News Flow in the Stock Price Prediction Problem?","authors":"T. D. Kulikova,&nbsp;E. Yu. Kovtun,&nbsp;S. A. Budennyy","doi":"10.1134/S1064562423701648","DOIUrl":"10.1134/S1064562423701648","url":null,"abstract":"<p>The power of machine learning is widely leveraged in the task of company stock price prediction. It is essential to incorporate historical stock prices and relevant external world information for constructing a more accurate predictive model. The sentiments of the financial news connected with the company can become such valuable knowledge. However, financial news has different topics, such as <i>Macro</i>, <i>Markets</i>, or <i>Product news</i>. The adoption of such categorization is usually out of scope in a market research. In this work, we aim to close this gap and explore the effect of capturing the news topic differentiation in the stock price prediction problem. Initially, we classify the financial news stream into 20 pre-defined topics with the pre-trained model. Then, we get sentiments and explore the topic of news group sentiment labeling. Moreover, we conduct the experiments with the several well-proved models for time series forecasting, including the Temporal Convolutional Network (TCN), the D-Linear, the Transformer, and the Temporal Fusion Transformer (TFT). 
In the results of our research, utilizing the information from separate topic groups contributes to a better performance of deep learning models compared to the approach when we consider all news sentiments without any division.</p>","PeriodicalId":531,"journal":{"name":"Doklady Mathematics","volume":"108 2 supplement","pages":"S503 - S510"},"PeriodicalIF":0.5,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Machine Learning As a Tool to Accelerate the Search for New Materials for Metal-Ion Batteries
IF 0.5 | CAS Tier 4 | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701612
V. T. Osipov, M. I. Gongola, Ye. A. Morkhova, A. P. Nemudryi, A. A. Kabanov

The search for new solid ionic conductors is an important topic of materials science that requires significant resources, but it can be accelerated using machine learning (ML) techniques. In this work, ML methods were applied to predict the migration energy of working ions. The training set is based on data on 225 lithium ion migration channels in 23 ion conductors. The descriptors were the parameters of free space in the crystal obtained by the Voronoi partitioning method. The accuracy of migration energy prediction was evaluated by comparison with the data obtained by the density functional theory method. Two ML methods were applied in the work: support vector regression and ordinal regression. It is shown that the parameters of free space in a crystal correlate with the migration energy, while the best results are obtained by ordinal regression. The developed ML models can be used as an additional filter in the analysis of ionic conductivity in solids.
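A minimal sketch of the support-vector-regression step, assuming scikit-learn and synthetic stand-in data: the real features are Voronoi free-space parameters and the targets are DFT migration energies, neither of which is reproduced here, so the toy relationship below is purely hypothetical.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in: rows are migration channels, columns mimic
# Voronoi free-space descriptors (e.g., channel radius, void volume).
rng = np.random.default_rng(42)
X = rng.uniform(0.5, 3.0, size=(225, 4))
# Hypothetical target: migration energy decreasing with channel radius.
y = 2.0 - 0.5 * X[:, 0] + 0.1 * rng.standard_normal(225)

# Scaling matters for RBF-kernel SVR, hence the pipeline.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
pred = model.predict(X[:5])
```

With DFT-computed energies as `y`, such a regressor serves as the cheap pre-screening filter the abstract describes before full first-principles calculations.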

{"title":"Machine Learning As a Tool to Accelerate the Search for New Materials for Metal-Ion Batteries","authors":"V. T. Osipov,&nbsp;M. I. Gongola,&nbsp;Ye. A. Morkhova,&nbsp; A. P. Nemudryi,&nbsp;A. A. Kabanov","doi":"10.1134/S1064562423701612","DOIUrl":"10.1134/S1064562423701612","url":null,"abstract":"<p>The search for new solid ionic conductors is an important topic of material science that requires significant resources, but can be accelerated using machine learning (ML) techniques. In this work, ML methods were applied to predict the migration energy of working ions. The training set is based on data on 225 lithium ion migration channels in 23 ion conductors. The descriptors were the parameters of free space in the crystal obtained by the Voronoi partitioning method. The accuracy of migration energy prediction was evaluated by comparison with the data obtained by the density functional theory method. Two methods of ML were applied in the work: support vector regression and ordinal regression. It is shown that the parameters of free space in a crystal correlate with the migration energy, while the best results are obtained by ordinal regression. The developed ML models can be used as an additional filter in the analysis of ionic conductivity in solids.</p>","PeriodicalId":531,"journal":{"name":"Doklady Mathematics","volume":"108 2 supplement","pages":"S476 - S483"},"PeriodicalIF":0.5,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Statistical Online Learning in Recurrent and Feedforward Quantum Neural Networks
IF 0.5 | CAS Tier 4 | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701557
S. V. Zuev

For adaptive artificial intelligence systems, the question of whether online learning is possible is especially important, since such training is what provides adaptation. The purpose of this work is to consider methods of online quantum machine learning for the two most common architectures of quantum neural networks: feedforward and recurrent. The work uses the quantumz module available on PyPI to emulate quantum computing and create artificial quantum neural networks. In addition, the genser module is used to transform data dimensions, which provides reversible transformation of dimensions without loss of information. The data for the experiments are taken from open sources. The paper implements the machine learning method without optimization, proposed by the author earlier. Online learning algorithms for recurrent and feedforward quantum neural networks are presented and experimentally confirmed. The proposed learning algorithms can be used as data science tools, as well as a part of adaptive intelligent control systems. The developed software can fully unleash its potential only on quantum computers, but, in the case of a small number of quantum registers, it can also be used in systems that emulate quantum computing, or in photonic computers.

{"title":"Statistical Online Learning in Recurrent and Feedforward Quantum Neural Networks","authors":"S. V. Zuev","doi":"10.1134/S1064562423701557","DOIUrl":"10.1134/S1064562423701557","url":null,"abstract":"<p>For adaptive artificial intelligence systems, the question of the possibility of online learning is especially important, since such training provides adaptation. The purpose of the work is to consider methods of quantum machine online learning for the two most common architectures of quantum neural networks: feedforward and recurrent. The work uses the quantumz module available on PyPI to emulate quantum computing and create artificial quantum neural networks. In addition, the genser module is used to transform data dimensions, which provides reversible transformation of dimensions without loss of information. The data for the experiments are taken from open sources. The paper implements the machine learning method without optimization, proposed by the author earlier. Online learning algorithms for recurrent and feedforward quantum neural network are presented and experimentally confirmed. The proposed learning algorithms can be used as data science tools, as well as a part of adaptive intelligent control systems. 
The developed software can fully unleash its potential only on quantum computers, but, in the case of a small number of quantum registers, it can also be used in systems that emulate quantum computing, or in photonic computers.</p>","PeriodicalId":531,"journal":{"name":"Doklady Mathematics","volume":"108 2 supplement","pages":"S317 - S324"},"PeriodicalIF":0.5,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142413768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
MTS Kion Implicit Contextualised Sequential Dataset for Movie Recommendation
IF 0.5 | CAS Tier 4 | Q3 MATHEMATICS | Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701594
I. Safilo, D. Tikhonovich, A. V. Petrov, D. I. Ignatov

We present a new movie and TV show recommendation dataset collected from the real users of MTS Kion video-on-demand platform. In contrast to other popular movie recommendation datasets, such as MovieLens or Netflix, our dataset is based on the implicit interactions registered at the watching time, rather than on explicit ratings. We also provide rich contextual and side information including interactions characteristics (such as temporal information, watch duration and watch percentage), user demographics and rich movies meta-information. In addition, we describe the MTS Kion Challenge—an online recommender systems challenge that was based on this dataset—and provide an overview of the best performing solutions of the winners. We keep the competition sandbox open, so the researchers are welcome to try their own recommendation algorithms and measure the quality on the private part of the dataset.

{"title":"MTS Kion Implicit Contextualised Sequential Dataset for Movie Recommendation","authors":"I. Safilo,&nbsp;D. Tikhonovich,&nbsp;A. V. Petrov,&nbsp;D. I. Ignatov","doi":"10.1134/S1064562423701594","DOIUrl":"10.1134/S1064562423701594","url":null,"abstract":"<p>We present a new movie and TV show recommendation dataset collected from the real users of MTS Kion video-on-demand platform. In contrast to other popular movie recommendation datasets, such as MovieLens or Netflix, our dataset is based on the implicit interactions registered at the watching time, rather than on explicit ratings. We also provide rich contextual and side information including interactions characteristics (such as temporal information, watch duration and watch percentage), user demographics and rich movies meta-information. In addition, we describe the MTS Kion Challenge—an online recommender systems challenge that was based on this dataset—and provide an overview of the best performing solutions of the winners. We keep the competition sandbox open, so the researchers are welcome to try their own recommendation algorithms and measure the quality on the private part of the dataset.</p>","PeriodicalId":531,"journal":{"name":"Doklady Mathematics","volume":"108 2 supplement","pages":"S456 - S464"},"PeriodicalIF":0.5,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Data Splitting in Distributed Optimization for Machine Learning 机器学习分布式优化中的最佳数据分割
IF 0.5 4区 数学 Q3 MATHEMATICS Pub Date : 2024-03-25 DOI: 10.1134/S1064562423701600
D. Medyakov, G. Molodtsov, A. Beznosikov, A. Gasnikov

The distributed optimization problem has become increasingly relevant recently. It has advantages such as processing large amounts of data in less time than non-distributed methods. However, most distributed approaches suffer from a significant bottleneck: the cost of communications. Therefore, a large amount of research has recently been directed at solving this problem. One such approach uses local data similarity. In particular, there exists an algorithm that provably exploits the similarity property optimally. But this result, like results from other works, addresses the communication bottleneck by relying only on the fact that communication is significantly more expensive than local computation, and does not take into account the varying capacities of network devices or the differing relationships between communication time and local computation costs. We consider this setup; the objective of this study is to achieve an optimal ratio of distributed data between the server and local machines for any costs of communications and local computations. The running times of the network are compared between uniform and optimal distributions. The superior theoretical performance of our solutions is experimentally validated.
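The trade-off the abstract describes can be sketched with a toy makespan model (this is an illustration of the general idea, not the authors' actual cost model): the server processes a fraction p of the n samples, the m local machines split the rest in parallel and pay a communication overhead, and the optimal p equalizes the two finish times. All cost coefficients below are assumed for the example.

```python
def split_time(p, n, m, c_server, c_local, t_comm):
    # Total running time if the server processes a fraction p of the n
    # samples and m local machines share the rest, each paying a one-off
    # communication cost t_comm; the slowest side determines the makespan.
    server = c_server * p * n
    local = t_comm + c_local * (1.0 - p) * n / m
    return max(server, local)

def optimal_split(n, m, c_server, c_local, t_comm):
    # Equalize the finish times: c_server*p*n = t_comm + c_local*(1-p)*n/m,
    # then clamp the solution into [0, 1].
    p = (t_comm + c_local * n / m) / (c_server * n + c_local * n / m)
    return min(1.0, max(0.0, p))
```

For example, with n = 10 000 samples, m = 10 workers, per-sample costs c_server = 1e-4 and c_local = 2e-4, and t_comm = 0.5, the optimal fraction is 0.7/1.2 ≈ 0.583, and the resulting makespan beats a uniform split of the data across all m + 1 machines (p = 1/11), which leaves the workers waiting on communication.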

D. Medyakov, G. Molodtsov, A. Beznosikov, A. Gasnikov, "Optimal Data Splitting in Distributed Optimization for Machine Learning," Doklady Mathematics, vol. 108, no. 2 supplement, pp. S465–S475. DOI: 10.1134/S1064562423701600