Fast discrete Laplace transforms

Yen Lee Loh
{"title":"快速离散拉普拉斯变换","authors":"Yen Lee Loh","doi":"10.1016/j.jcmds.2023.100082","DOIUrl":null,"url":null,"abstract":"<div><p>The discrete Laplace transform (DLT) with <span><math><mi>M</mi></math></span> inputs and <span><math><mi>N</mi></math></span> outputs has a nominal computational cost of <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>M</mi><mi>N</mi><mo>)</mo></mrow></mrow></math></span>. There are approximate DLT algorithms with <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>M</mi><mo>+</mo><mi>N</mi><mo>)</mo></mrow></mrow></math></span> cost such that the output errors divided by the <em>sum of the inputs</em> are less than a fixed tolerance <span><math><mi>η</mi></math></span>. However, certain important applications of DLTs require a more stringent accuracy criterion, where the output errors divided by the <em>true output values</em> are less than <span><math><mi>η</mi></math></span>. We present a fast DLT algorithm combining two strategies. The bottom-up strategy exploits the Taylor expansion of the Laplace transform kernel. The top-down strategy chooses groups of terms in the DLT to include or neglect, based on the whole summand, and not just on the Laplace transform kernel. The overall effort is <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>M</mi><mo>+</mo><mi>N</mi><mo>)</mo></mrow></mrow></math></span> when the source and target points are very dense or very sparse, and appears to be <span><math><mrow><mi>O</mi><mrow><mo>(</mo><msup><mrow><mrow><mo>(</mo><mi>M</mi><mo>+</mo><mi>N</mi><mo>)</mo></mrow></mrow><mrow><mn>1</mn><mo>.</mo><mn>5</mn></mrow></msup><mo>)</mo></mrow></mrow></math></span> in the intermediate regime. Our algorithm achieves the same accuracy as brute-force evaluation, and is typically 10–100 times faster.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"8 ","pages":"Article 100082"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Fast discrete Laplace transforms\",\"authors\":\"Yen Lee Loh\",\"doi\":\"10.1016/j.jcmds.2023.100082\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The discrete Laplace transform (DLT) with <span><math><mi>M</mi></math></span> inputs and <span><math><mi>N</mi></math></span> outputs has a nominal computational cost of <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>M</mi><mi>N</mi><mo>)</mo></mrow></mrow></math></span>. There are approximate DLT algorithms with <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>M</mi><mo>+</mo><mi>N</mi><mo>)</mo></mrow></mrow></math></span> cost such that the output errors divided by the <em>sum of the inputs</em> are less than a fixed tolerance <span><math><mi>η</mi></math></span>. However, certain important applications of DLTs require a more stringent accuracy criterion, where the output errors divided by the <em>true output values</em> are less than <span><math><mi>η</mi></math></span>. We present a fast DLT algorithm combining two strategies. The bottom-up strategy exploits the Taylor expansion of the Laplace transform kernel. The top-down strategy chooses groups of terms in the DLT to include or neglect, based on the whole summand, and not just on the Laplace transform kernel. 
The overall effort is <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>M</mi><mo>+</mo><mi>N</mi><mo>)</mo></mrow></mrow></math></span> when the source and target points are very dense or very sparse, and appears to be <span><math><mrow><mi>O</mi><mrow><mo>(</mo><msup><mrow><mrow><mo>(</mo><mi>M</mi><mo>+</mo><mi>N</mi><mo>)</mo></mrow></mrow><mrow><mn>1</mn><mo>.</mo><mn>5</mn></mrow></msup><mo>)</mo></mrow></mrow></math></span> in the intermediate regime. Our algorithm achieves the same accuracy as brute-force evaluation, and is typically 10–100 times faster.</p></div>\",\"PeriodicalId\":100768,\"journal\":{\"name\":\"Journal of Computational Mathematics and Data Science\",\"volume\":\"8 \",\"pages\":\"Article 100082\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computational Mathematics and Data Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2772415823000093\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computational Mathematics and Data Science","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772415823000093","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

The discrete Laplace transform (DLT) with M inputs and N outputs has a nominal computational cost of O(MN). There are approximate DLT algorithms with O(M+N) cost such that the output errors divided by the sum of the inputs are less than a fixed tolerance η. However, certain important applications of DLTs require a more stringent accuracy criterion, where the output errors divided by the true output values are less than η. We present a fast DLT algorithm combining two strategies. The bottom-up strategy exploits the Taylor expansion of the Laplace transform kernel. The top-down strategy chooses groups of terms in the DLT to include or neglect, based on the whole summand, and not just on the Laplace transform kernel. The overall effort is O(M+N) when the source and target points are very dense or very sparse, and appears to be O((M+N)^1.5) in the intermediate regime. Our algorithm achieves the same accuracy as brute-force evaluation, and is typically 10–100 times faster.
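For concreteness, the sketch below assumes the standard DLT form F_n = sum_m a_m exp(-s_n t_m) (the paper's exact conventions may differ) and illustrates three things mentioned in the abstract: the O(MN) brute-force evaluation, the two accuracy criteria (error relative to the sum of the inputs versus error relative to the true output value), and the kind of kernel Taylor expansion that a bottom-up strategy can exploit, shown here for a single narrow source cluster. The function names, variable names, and single-cluster setup are illustrative assumptions, not the paper's algorithm.

```python
import math
import numpy as np

def dlt_brute_force(a, t, s):
    """Brute-force DLT, O(M*N): F[n] = sum_m a[m] * exp(-s[n] * t[m]).
    (Kernel convention assumed for illustration; the paper's may differ.)"""
    return np.exp(-np.outer(s, t)) @ a

def dlt_taylor_cluster(a, t, s, order=10):
    """Sketch of the kernel-Taylor idea for ONE source cluster:
    exp(-s*t) = exp(-s*t0) * sum_k (-s)^k (t - t0)^k / k!, so the cluster
    moments mu_k = sum_m a[m]*(t[m]-t0)^k are formed once (O(M)) and every
    target reuses them (O(N)). Accurate only while s*|t - t0| stays small,
    which is why a real fast DLT partitions sources into small clusters."""
    t0 = t.mean()
    ks = np.arange(order + 1)
    fact = np.array([math.factorial(k) for k in ks], dtype=float)
    mu = ((t - t0)[None, :] ** ks[:, None]) @ a        # cluster moments mu_k
    coeff = (-s[:, None]) ** ks[None, :] / fact        # (-s)^k / k!
    return np.exp(-s * t0) * (coeff @ mu)

# --- demo -------------------------------------------------------------
rng = np.random.default_rng(0)
M, N = 500, 400
t = rng.uniform(2.0, 2.2, M)     # narrow source cluster, so the expansion converges
a = rng.uniform(0.0, 1.0, M)     # input weights
s = np.linspace(0.1, 5.0, N)     # target points

F_exact  = dlt_brute_force(a, t, s)
F_taylor = dlt_taylor_cluster(a, t, s)

# Two accuracy criteria mentioned in the abstract:
err = np.abs(F_taylor - F_exact)
rel_to_input_sum  = err / a.sum()          # error / (sum of inputs)
rel_to_true_value = err / np.abs(F_exact)  # error / (true output value)
print(f"max error / sum(inputs): {rel_to_input_sum.max():.2e}")
print(f"max error / true output: {rel_to_true_value.max():.2e}")
```

Because F_n can be many orders of magnitude smaller than sum_m a_m when s_n t_m is large, the second criterion is far more stringent, which is presumably why the top-down strategy decides which groups of terms to keep based on the whole summand a_m exp(-s_n t_m) rather than on the kernel alone.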
