
Latest publications from the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Globally Optimal Energy Efficiency Maximization for Capacity-Limited Fronthaul CRANs with Dynamic Power Amplifiers’ Efficiency
K. Nguyen, Quang-Doanh Vu, Le-Nam Tran, M. Juntti
A joint beamforming and remote radio head (RRH)-user association design for the downlink of cloud radio access networks (CRANs) is considered. The aim is to maximize the system energy efficiency subject to constraints on users' quality-of-service, the capacity of fronthaul links, and transmit power. Different from conventional power consumption models, we embrace the dependence of baseband signal processing power on the data rate, and the dynamics of the power amplifiers' efficiency. The considered problem is a mixed Boolean nonconvex program whose optimal solution is difficult to find. As our main contribution, we provide a discrete branch-reduce-and-bound (DBRnB) approach to solve the problem globally. We also make some modifications to the standard DBRnB procedure, which remarkably improve the convergence performance. Numerical results are provided to confirm the validity of the proposed method.
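As a rough illustration of the kind of problem described, a generic energy-efficiency maximization with QoS, fronthaul-capacity and power constraints can be written as below. All symbols (beamformers w, Boolean association variables s, rates R_k, fronthaul capacities C_n, power budgets P_n) are introduced here only for illustration; the paper's exact formulation, including its rate-dependent baseband power and dynamic PA-efficiency model, is not reproduced.
```latex
\max_{\mathbf{w},\;\mathbf{s}\in\{0,1\}^{N\times K}}\;
  \frac{\sum_{k} R_k(\mathbf{w})}{P_{\mathrm{total}}(\mathbf{w},\mathbf{s})}
\quad\text{s.t.}\quad
  R_k(\mathbf{w}) \ge r_k^{\min},\qquad
  \sum_{k} s_{n,k}\, R_k(\mathbf{w}) \le C_n,\qquad
  \sum_{k} \lVert \mathbf{w}_{n,k} \rVert^2 \le P_n .
```
The fractional objective (sum rate over total consumed power) combined with the Boolean association variables is what makes such problems mixed Boolean nonconvex programs.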
{"title":"Globally Optimal Energy Efficiency Maximization for Capacity-Limited Fronthaul Crans with Dynamic Power Amplifiers’ Efficiency","authors":"K. Nguyen, Quang-Doanh Vu, Le-Nam Tran, M. Juntti","doi":"10.1109/ICASSP.2018.8461308","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461308","url":null,"abstract":"A joint beamforming and remote radio head (RRH)-user association design for downlink of cloud radio access networks (CRANs) is considered. The aim is to maximize the system energy efficiency subject to constraints on users' quality-of-service, capacity offronthaullinks and transmit power. Different to the conventional power consumption models, we embrace the dependence of baseband signal processing power on the data rate, and the dynamics of the power amplifiers' efficiency. The considered problem is a mixed Boolean nonconvex program whose optimal solution is difficult to find. As our main contribution, we provide a discrete branch-reduce-and-bound (DBRnB) approach to solve the problem globally. We also make some modifications to the standard DBRnB procedure. Those remarkably improve the convergence performance. Numerical results are provided to confirm the validity of the proposed method.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"346 1","pages":"3759-3763"},"PeriodicalIF":0.0,"publicationDate":"2018-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83456041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Orthogonality-Regularized Masked NMF for Learning on Weakly Labeled Audio Data
I. Sobieraj, Lucas Rencker, Mark D. Plumbley
Non-negative Matrix Factorization (NMF) is a well-established tool for audio analysis. However, it is not well suited for learning on weakly labeled data, i.e. data where the exact timestamp of the sound of interest is not known. In this paper we propose a novel extension to NMF that allows it to extract meaningful representations from weakly labeled audio data. Recently, a constraint on the activation matrix was proposed to adapt NMF for learning on weak labels. To further improve the method, we propose to add an orthogonality regularizer on the dictionary to the cost function of NMF. In that way we obtain appropriate dictionaries for the sounds of interest and background sounds from weakly labeled data. We demonstrate that the proposed Orthogonality-Regularized Masked NMF (ORM-NMF) can be used for audio event detection of rare events and evaluate the method on the development data from Task 2 of the DCASE 2017 Challenge.
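One common way to write an NMF cost with an orthogonality regularizer on the dictionary is sketched below; the Frobenius-norm form of the regularizer and the weight λ are illustrative choices and need not match the paper's exact cost or its masking scheme.
```latex
\min_{\mathbf{W}\ge 0,\;\mathbf{H}\ge 0}\;
  \lVert \mathbf{V} - \mathbf{W}\mathbf{H} \rVert_F^2
  \;+\; \lambda\,\lVert \mathbf{W}^{\top}\mathbf{W} - \mathbf{I} \rVert_F^2 ,
```
where V is the spectrogram, W the dictionary and H the activations; the second term pushes dictionary atoms toward orthogonality so that atoms for the sounds of interest and for the background do not overlap.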
{"title":"Orthogonality-Regularized Masked NMF for Learning on Weakly Labeled Audio Data","authors":"I. Sobieraj, Lucas Rencker, Mark D. Plumbley","doi":"10.1109/ICASSP.2018.8461293","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461293","url":null,"abstract":"Non-negative Matrix Factorization (NMF) is a well established tool for audio analysis. However, it is not well suited for learning on weakly labeled data, i.e. data where the exact timestamp of the sound of interest is not known. In this paper we propose a novel extension to NMF, that allows it to extract meaningful representations from weakly labeled audio data. Recently, a constraint on the activation matrix was proposed to adapt for learning on weak labels. To further improve the method we propose to add an orthogonality regularizer of the dictionary in the cost function of NMF. In that way we obtain appropriate dictionaries for the sounds of interest and background sounds from weakly labeled data. We demonstrate that the proposed Orthogonality-Regularized Masked NMF (ORM-NMF) can be used for Audio Event Detection of rare events and evaluate the method on the development data from Task2 of DCASE2017 Challenge.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"34 1","pages":"2436-2440"},"PeriodicalIF":0.0,"publicationDate":"2018-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90380650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Individual Difference of Ultrasonic Transducers for Parametric Array Loudspeaker
Shota Minami, Jun Kuroda, Yasuhiro Oikawa
A parametric array loudspeaker (PAL) typically consists of many ultrasonic transducers and is driven by an ultrasonic carrier modulated by audible sound. Because each ultrasonic transducer has a slightly different resonant frequency, there is individual variation among the transducers of a PAL arising from the manufacturing process. In this paper, two PALs are built from sets of transducers with large and small variance of resonant frequencies, respectively. The quality factor of the PAL with large resonant-frequency variance is smaller than that of the PAL with small variance, and its demodulated audible sound pressure level (SPL) is larger and almost flat up to 3 kHz.
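For reference, the quality factor mentioned here is commonly defined from the measured frequency response as
```latex
Q = \frac{f_r}{\Delta f_{-3\,\mathrm{dB}}} ,
```
where f_r is the resonant frequency and Δf is the -3 dB bandwidth; a larger spread of resonant frequencies across the transducers broadens the array's combined response and hence lowers its effective Q, consistent with the flatter demodulated SPL reported for the large-variance PAL.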
{"title":"Individual Difference of Ultrasonic Transducers for Parametric Array Loudspeaker","authors":"Shota Minami, Jun Kuroda, Yasuhiro Oikawa","doi":"10.1109/ICASSP.2018.8462189","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8462189","url":null,"abstract":"A parametric array loudspeaker (PAL) consists of a lot of ultrasonic transducers in most cases and is driven by an ultrasonic which is modulated by audible sound. Because each ultrasonic transducer has each difference resonant frequency, there is the individual difference in ultrasonic transducers of a PAL in a manufacturing process. In this paper, two PALs are made of each set of transducers with large and small variance of resonant frequencies. Quality factor of PAL with the large variance of resonant frequencies is smaller than that of PAL with small variance, and the demodulated audible sound pressure level (SPL) is large and almost flat to 3 kHz in PAL with the large variance of resonant frequencies.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"48 1","pages":"486-490"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83428538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Quickest Detection of Dynamic Events in Sensor Networks
Shaofeng Zou, V. Veeravalli
We consider the problem of quickest detection of dynamic events in sensor networks. After an event occurs, a number of sensors are affected and undergo a change in the statistics of their observations. We assume that the event is dynamic and can propagate with time, i.e., different sensors perceive the event at different times. The goal is to design a sequential algorithm that can detect when the event has affected no less than η sensors as quickly as possible, subject to false alarm constraints. We design a computationally efficient algorithm that is adaptive to unknown propagation dynamics, and demonstrate its asymptotic optimality as the false alarm rate goes to zero. We also provide numerical simulations to validate our theoretical results.
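A minimal sketch of the detection setting, assuming known pre- and post-change models and a plain per-sensor CUSUM statistic; the paper's algorithm is additionally adaptive to unknown propagation dynamics, which is not reproduced here.
```python
import numpy as np

def multi_sensor_cusum(obs, llr, eta, threshold):
    """Illustrative multi-sensor CUSUM rule (a simplified stand-in for the
    paper's adaptive procedure): update a CUSUM statistic at every sensor
    and stop once at least `eta` statistics exceed `threshold`.

    obs : (T, N) observations from N sensors over T time steps
    llr : function mapping an (N,) observation vector to per-sensor
          log-likelihood ratios of the post- vs. pre-change model
    """
    W = np.zeros(obs.shape[1])              # per-sensor CUSUM statistics
    for t, x in enumerate(obs):
        W = np.maximum(W + llr(x), 0.0)     # CUSUM recursion, clipped at zero
        if np.count_nonzero(W > threshold) >= eta:
            return t                        # stopping time (detection declared)
    return None                             # no detection within the horizon

# Example: change from N(0, 1) to N(1, 1), so the per-sample LLR is x - 0.5.
rng = np.random.default_rng(1)
obs = rng.normal(size=(200, 10))
obs[100:, :4] += 1.0                        # event reaches 4 of 10 sensors at t = 100
print(multi_sensor_cusum(obs, lambda x: x - 0.5, eta=3, threshold=8.0))
```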
{"title":"Quickest Detection of Dynamic Events in Sensor Networks","authors":"Shaofeng Zou, V. Veeravalli","doi":"10.1109/ICASSP.2018.8461854","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461854","url":null,"abstract":"We consider the problem of quickest detection of dynamic events in sensor networks. After an event occurs, a number of sensors are affected and undergo a change in the statistics of their observations. We assume that the event is dynamic and can propagate with time, i.e., different sensors perceive the event at different times. The goal is to design a sequential algorithm that can detect when the event has affected no less than η sensors as quickly as possible, subject to false alarm constraints. We design a computationally efficient algorithm that is adaptive to unknown propagation dynamics, and demonstrate its asymptotic optimality as the false alarm rate goes to zero. We also provide numerical simulations to validate our theoretical results.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"5 1","pages":"6907-6911"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85207765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
On Selecting Antenna Placements in Indoor Radio Environments
Fabian Agren, Johan Sward, A. Jakobsson
In this work, we introduce an antenna placement algorithm for indoor radio networks. The algorithm aims to minimize the number of antennas required to provide sufficient coverage in an area of interest, minimizing the cost of equipment and installation work. The optimization algorithm exploits a semi-deterministic model for the most dominant radio paths. Each path is in turn determined with the A⋆ path finding algorithm. Both the proposed antenna placement algorithm and the used indoor radio propagation model are evaluated using real measurements, confirming the efficiency of the method.
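As a loose illustration of the combinatorial side of the problem (not the paper's algorithm, which couples a semi-deterministic dominant-path model with the A⋆ path search), a greedy set-cover heuristic over a precomputed coverage matrix might look like the sketch below; the disc coverage model in the example is a placeholder for the paper's propagation predictions.
```python
import numpy as np

def greedy_placement(coverage):
    """Greedy set-cover heuristic: coverage[c, p] is True when candidate
    antenna position c gives sufficient signal at grid point p (e.g.,
    predicted path loss below a margin).  Returns chosen candidate indices."""
    n_cand, n_pts = coverage.shape
    uncovered = np.ones(n_pts, dtype=bool)
    chosen = []
    while uncovered.any():
        gains = (coverage & uncovered).sum(axis=1)   # new points each candidate adds
        best = int(np.argmax(gains))
        if gains[best] == 0:                         # remaining points unreachable
            break
        chosen.append(best)
        uncovered &= ~coverage[best]
    return chosen

rng = np.random.default_rng(0)
cand = rng.uniform(0, 20, size=(30, 2))      # candidate antenna positions (m)
pts = rng.uniform(0, 20, size=(200, 2))      # grid points to cover
coverage = np.linalg.norm(cand[:, None] - pts[None], axis=2) < 6.0  # toy disc model
print(greedy_placement(coverage))
```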
{"title":"On Selecting Antenna Placements in Indoor Radio Environments","authors":"Fabian Agren, Johan Sward, A. Jakobsson","doi":"10.1109/ICASSP.2018.8462374","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8462374","url":null,"abstract":"In this work, we introduce an antenna placement algorithm for indoor radio networks. The algorithm aims to minimize the number of antennas required to provide sufficient coverage in an area of interest, minimizing the cost of equipment and installation work. The optimization algorithm exploits a semi-deterministic model for the most dominant radio paths. Each path is in turn determined with the A⋆ path finding algorithm. Both the proposed antenna placement algorithm and the used indoor radio propagation model are evaluated using real measurements, confirming the efficiency of the method.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"9 1","pages":"3719-3723"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81959547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Penalized Method for the Predictive Limit of Learning
Jie Ding, Enmao Diao, Jiawei Zhou, V. Tarokh
Machine learning systems learn from and make predictions by building models from observed data. Because large models tend to overfit while small models tend to underfit for a given fixed dataset, a critical challenge is to select an appropriate model (e.g. set of variables/features). Model selection aims to strike a balance between the goodness of fit and model complexity, and thus to gain reliable predictive power. In this paper, we study a penalized model selection technique that asymptotically achieves the optimal expected prediction loss (referred to as the limit of learning) offered by a set of candidate models. We prove that the proposed procedure is both statistically efficient, in the sense that it asymptotically approaches the limit of learning, and computationally efficient, in the sense that it can be much faster than cross-validation methods. Our theory applies to a wide variety of model classes, loss functions, and high dimensions (in the sense that the models' complexity can grow with data size). We release a Python package implementing the proposed method for common use cases such as logistic regression and neural networks.
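Penalized model selection criteria in this literature generically take the form below; the empirical loss, the model complexity d_m and the penalty weight λ_n are placeholders introduced here, and the paper's specific penalty targeting the limit of learning is not reproduced.
```latex
\hat{m} \;=\; \arg\min_{m \in \mathcal{M}}\;
  \Big\{ \hat{L}_n(m) \;+\; \lambda_n\, d_m \Big\},
```
where L̂_n(m) is the empirical loss of candidate model m fitted on n samples and the penalty discourages unnecessarily complex models.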
{"title":"A Penalized Method for the Predictive Limit of Learning","authors":"Jie Ding, Enmao Diao, Jiawei Zhou, V. Tarokh","doi":"10.1109/ICASSP.2018.8461832","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461832","url":null,"abstract":"Machine learning systems learn from and make predictions by building models from observed data. Because large models tend to overfit while small models tend to underfit for a given fixed dataset, a critical challenge is to select an appropriate model (e.g. set of variables/features). Model selection aims to strike a balance between the goodness of fit and model complexity, and thus to gain reliable predictive power. In this paper, we study a penalized model selection technique that asymptotically achieves the optimal expected prediction loss (referred to as the limit of learning) offered by a set of candidate models. We prove that the proposed procedure is both statistically efficient in the sense that it asymptotically approaches the limit of learning, and computationally efficient in the sense that it can be much faster than cross validation methods. Our theory applies for a wide variety of model classes, loss functions, and high dimensions (in the sense that the models' complexity can grow with data size). We released a python package with our proposed method for general usage like logistic regression and neural networks.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"38 1","pages":"4414-4418"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86695862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Concurrent Clutter and Noise Suppression via Low Rank Plus Sparse Optimization for Non-Contrast Ultrasound Flow Doppler Processing in Microvasculature
Mahdi Bayat, M. Fatemi
A low-rank-plus-sparse framework for concurrent clutter and noise suppression in Doppler processing of echo ensembles obtained by non-contrast ultrasound imaging is presented. The echo ensemble is modeled as the sum of a low-rank component, representing mostly the strong tissue clutter signal, and a sparse component, representing mostly blood echoes received from slow flows in the microvasculature. The proposed method is applied to simulated data, and its superior performance over conventional singular value thresholding in removing clutter and background noise is demonstrated.
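The textbook low-rank-plus-sparse (robust PCA) decomposition underlying such frameworks writes the Casorati matrix X of the echo ensemble as X = L + S and solves, for example,
```latex
\min_{\mathbf{L},\,\mathbf{S}}\;
  \lVert \mathbf{L} \rVert_{*} \;+\; \lambda\,\lVert \mathbf{S} \rVert_{1}
\quad\text{s.t.}\quad \mathbf{X} = \mathbf{L} + \mathbf{S},
```
where the nuclear norm promotes a low-rank tissue-clutter component L and the l1 norm a sparse blood-flow component S; the paper's exact cost and solver may differ from this standard form.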
{"title":"Concurrent Clutter and Noise Suppression via Low Rank Plus Sparse Optimization for Non-Contrast Ultrasound Flow Doppler Processing in Microvasculature","authors":"Mahdi Bayat, M. Fatemi","doi":"10.1109/ICASSP.2018.8461638","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461638","url":null,"abstract":"A low rank plus sparse framework for concurrent clutter and noise suppression in Doppler processing of echo ensembles obtained by non-contrast ultrasound imaging is presented. A low rank component which represents mostly strong tissue clutter signal and a sparse component which represents mostly blood echoes received from slow flows in microvasculature are assumed. The proposed method is applied to simulated data and its superior performance over conventional singular value thresholding in removing clutter and background noise is presented.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"57 1","pages":"1080-1084"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89045626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Online Multi-Kernel Learning with Orthogonal Random Features
Yanning Shen, Tianyi Chen, G. Giannakis
Kernel-based methods have well-appreciated performance in various nonlinear learning tasks. Most of them rely on a preselected kernel, whose prudent choice presumes task-specific prior information. To cope with this limitation, multi-kernel learning has gained popularity thanks to its flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging the random feature approximation and its recent orthogonality-promoting variant, the present contribution develops an online multi-kernel learning scheme to infer the intended nonlinear function ‘on the fly.’ Performance analysis shows that the novel algorithm can afford sublinear regret. Numerical tests on real datasets are carried out to showcase the effectiveness of the proposed algorithms.
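A minimal sketch of the random feature approximation this work builds on, using plain (non-orthogonal) random Fourier features for an RBF kernel; the paper's orthogonality-promoting variant and its online multi-kernel combination are not reproduced here.
```python
import numpy as np

def rff_map(X, n_feat, sigma, rng):
    """Standard random Fourier features approximating the RBF kernel
    k(x, y) = exp(-||x - y||^2 / (2 sigma^2)); shown only to illustrate
    the idea, not the paper's orthogonal variant."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_feat))   # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=n_feat)            # random phases
    return np.sqrt(2.0 / n_feat) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = rff_map(X, n_feat=2000, sigma=1.0, rng=rng)
approx = Z @ Z.T                                              # approximate kernel matrix
exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2)    # exact RBF with sigma = 1
print(np.max(np.abs(approx - exact)))                         # small approximation error
```
In the multi-kernel setting, one such feature map is drawn per candidate kernel in the dictionary and their predictions are combined with weights updated online.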
{"title":"Online Multi-Kernel Learning with Orthogonal Random Features","authors":"Yanning Shen, Tianyi Chen, G. Giannakis","doi":"10.1109/ICASSP.2018.8461509","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461509","url":null,"abstract":"Kernel-based methods have well-appreciated performance in various nonlinear learning tasks. Most of them rely on a preselected kernel, whose prudent choice presumes task-specific prior information. To cope with this limitation, multi-kernel learning has gained popularity thanks to its flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging the random feature approximation and its recent orthogonality-promoting variant, the present contribution develops an online multi-kernel learning scheme to infer the intended nonlinear function ‘on the fly.’ Performance analysis shows that the novel algorithm can afford sublinear regret. Numerical tests on real datasets are carried out to showcase the effectiveness of the proposed algorithms.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"11 1","pages":"6289-6293"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90154936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
An Analytical Method to Determine Minimum Per-Layer Precision of Deep Neural Networks
Charbel Sakr, Naresh R Shanbhag
There has been growing interest in the deployment of deep learning systems onto resource-constrained platforms for fast and efficient inference. However, typical models are overwhelmingly complex, making such integration very challenging and requiring compression mechanisms such as reduced precision. We present a layer-wise granular precision analysis which allows us to efficiently quantize pre-trained deep neural networks at minimal cost in terms of accuracy degradation. Our results are consistent with recent findings that perturbations in earlier layers are most destructive, and hence that earlier layers need more precision than later ones. Our approach allows for significant complexity reduction, demonstrated by numerical results on the MNIST and CIFAR-10 datasets. Indeed, for an equivalent level of accuracy, our fine-grained approach reduces the minimum precision in the network by up to 8 bits over a naive uniform assignment. Furthermore, we match the accuracy level of a state-of-the-art binary network while requiring up to ~3.5× lower complexity. Similarly, when compared to a state-of-the-art fixed-point network, the complexity savings are even higher (up to ~14×) with no loss in accuracy.
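To make the setting concrete, a uniform per-layer quantizer applied to hypothetical pre-trained weights is sketched below; the layer names and bit widths are made up for illustration, and the paper's analytical rule for choosing the minimum bits per layer is not reproduced.
```python
import numpy as np

def quantize_layer(w, bits):
    """Uniform symmetric quantize-dequantize of one layer's weights to `bits`
    bits.  This only illustrates applying a chosen per-layer precision."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)   # step size over the full range
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
# hypothetical pre-trained layers and a made-up precision assignment
weights = {"conv1": rng.standard_normal((32, 3, 3, 3)),
           "fc1": rng.standard_normal((128, 512))}
precision = {"conv1": 8, "fc1": 4}                       # earlier layer gets more bits
quantized = {name: quantize_layer(w, precision[name]) for name, w in weights.items()}
print({name: float(np.max(np.abs(w - quantized[name]))) for name, w in weights.items()})
```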
{"title":"An Analytical Method to Determine Minimum Per-Layer Precision of Deep Neural Networks","authors":"Charbel Sakr, Naresh R Shanbhag","doi":"10.1109/ICASSP.2018.8461702","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461702","url":null,"abstract":"There has been growing interest in the deployment of deep learning systems onto resource-constrained platforms for fast and efficient inference. However, typical models are overwhelmingly complex, making such integration very challenging and requiring compression mechanisms such as reduced precision. We present a layer-wise granular precision analysis which allows us to efficiently quantize pre-trained deep neural networks at minimal cost in terms of accuracy degradation. Our results are consistent with recent findings that perturbations in earlier layers are most destructive and hence needing more precision than in later layers. Our approach allows for significant complexity reduction demonstrated by numerical results on the MNIST and CIFAR-10 datasets. Indeed, for an equivalent level of accuracy, our fine-grained approach reduces the minimum precision in the network up to 8 bits over a naive uniform assignment. Furthermore, we match the accuracy level of a state-of-the-art binary network while requiring up to ~ 3.5 × lower complexity. Similarly, when compared to a state-of-the-art fixed-point network, the complexity savings are even higher (up to ~ 14×) with no loss in accuracy.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"124 1","pages":"1090-1094"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88035638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
Harnessing Bandit Online Learning to Low-Latency Fog Computing
Tianyi Chen, G. Giannakis
This paper focuses on the online fog computing tasks in the Internet-of-Things (IoT), where online decisions must flexibly adapt to the changing user preferences (loss functions), and the temporally unpredictable availability of resources (constraints). Tailored for such human-in-the-loop systems where the loss functions are hard to model, a family of bandit online saddle-point (BanSP) schemes are developed, which adaptively adjust the online operations based on (possibly multiple) bandit feedback of the loss functions, and the changing environment. Performance here is assessed by: i) dynamic regret that generalizes the widely used static regret; and, ii) fit that captures the accumulated amount of constraint violations. Specifically, BanSP is proved to simultaneously yield sub-linear dynamic regret and fit, provided that the best dynamic solutions vary slowly over time. Numerical tests on fog computing tasks corroborate that BanSP offers desired performance under such limited information.
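The two performance measures named here are commonly defined along the following lines; the symbols are introduced for illustration and the paper's precise definitions may differ in detail.
```latex
\mathrm{Reg}_T^{d} \;=\; \sum_{t=1}^{T} f_t(\mathbf{x}_t) \;-\; \sum_{t=1}^{T} f_t(\mathbf{x}_t^{*}),
\qquad
\mathrm{Fit}_T \;=\; \Big\lVert \Big[ \sum_{t=1}^{T} \mathbf{g}_t(\mathbf{x}_t) \Big]_{+} \Big\rVert,
```
where x_t* is the per-slot optimizer of the loss f_t subject to the time-varying constraints g_t(x) ≤ 0, and [·]_+ keeps only the positive (violated) part of the accumulated constraints.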
{"title":"Harnessing Bandit Online Learning to Low-Latency Fog Computing","authors":"Tianyi Chen, G. Giannakis","doi":"10.1109/ICASSP.2018.8461641","DOIUrl":"https://doi.org/10.1109/ICASSP.2018.8461641","url":null,"abstract":"This paper focuses on the online fog computing tasks in the Internet-of-Things (IoT), where online decisions must flexibly adapt to the changing user preferences (loss functions), and the temporally unpredictable availability of resources (constraints). Tailored for such human-in-the-loop systems where the loss functions are hard to model, a family of bandit online saddle-point (BanSP) schemes are developed, which adaptively adjust the online operations based on (possibly multiple) bandit feedback of the loss functions, and the changing environment. Performance here is assessed by: i) dynamic regret that generalizes the widely used static regret; and, ii) fit that captures the accumulated amount of constraint violations. Specifically, BanSP is proved to simultaneously yield sub-linear dynamic regret and fit, provided that the best dynamic solutions vary slowly over time. Numerical tests on fog computing tasks corroborate that BanSP offers desired performance under such limited information.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"1 1","pages":"6418-6422"},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80240597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7