
Latest publications in Found. Trends Signal Process.

Biomedical Image Reconstruction: From the Foundations to Deep Neural Networks
Pub Date : 2019-01-11 DOI: 10.1561/2000000101
Michael T. McCann, M. Unser
This tutorial covers biomedical image reconstruction, from the foundational concepts of system modeling and direct reconstruction to modern sparsity and learning-based approaches. Imaging is a critical tool in biological research and medicine, and most imaging systems necessarily use an image-reconstruction algorithm to create an image; the design of these algorithms has been a topic of research since at least the 1960s. In the last few years, machine learning-based approaches have shown impressive performance on image reconstruction problems, triggering a wave of enthusiasm and creativity around the paradigm of learning. Our goal is to unify this body of research, identifying common principles and reusable building blocks across decades and among diverse imaging modalities. We first describe system modeling, emphasizing how a few building blocks can be used to describe a broad range of imaging modalities. We then discuss reconstruction algorithms, grouping them into three broad generations. The first are the classical direct methods, including Tikhonov regularization; the second are the variational methods based on sparsity and the theory of compressive sensing; and the third are the learning-based (also called data-driven) methods, especially those using deep convolutional neural networks. There are strong links between these generations: classical (first-generation) methods appear as modules inside the latter two, and the former two are used to inspire new designs for learning-based (third-generation) methods. As a result, a solid understanding of all three generations is necessary for the design of state-of-the-art algorithms.
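The classical direct methods mentioned in this abstract admit a compact closed form. As an illustrative sketch (the blur operator, noise level, and regularization weight are assumptions for the toy example, not taken from the monograph), Tikhonov-regularized reconstruction of a 1-D signal can be written as:

```python
import numpy as np

# First-generation "direct" reconstruction: Tikhonov-regularized least
# squares for a toy 1-D deblurring problem.
rng = np.random.default_rng(0)
n = 64
H = np.eye(n) + 0.3 * (np.eye(n, k=1) + np.eye(n, k=-1))  # mild blur operator
H /= H.sum(axis=1, keepdims=True)

x_true = np.zeros(n)
x_true[20:40] = 1.0                       # piecewise-constant "object"
y = H @ x_true + 0.01 * rng.standard_normal(n)

# x_hat = argmin ||H x - y||^2 + lam ||x||^2  =>  (H^T H + lam I) x = H^T y
lam = 1e-2
x_hat = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The variational (second-generation) and learned (third-generation) methods discussed in the tutorial replace the quadratic penalty `lam * ||x||^2` with sparsity-promoting or learned regularizers.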
Citations: 30
A Survey on the Low-Dimensional-Model-based Electromagnetic Imaging
Pub Date : 2018-06-05 DOI: 10.1561/2000000103
Lianlin Li, M. Hurtado, F. Xu, Bing Zhang, T. Jin, Tie Jun Cui, M. Stevanovic, A. Nehorai
The low-dimensional-model-based electromagnetic imaging is an emerging member of the big family of computational imaging, by which the low-dimensional models of underlying signals are incorporated into both data acquisition systems and reconstruction algorithms for electromagnetic imaging, in order to improve the imaging performance and break the bottleneck of existing electromagnetic imaging methodologies. Over the past decade, we have witnessed profound impacts of the low-dimensional models on electromagnetic imaging. However, the low-dimensional-model-based electromagnetic imaging remains at its early stage, and many …
Lianlin Li, Martin Hurtado, Feng Xu, Bing Chen Zhang, Tian Jin, Tie Jun Cui, Marija Nikolic Stevanovic and Arye Nehorai (2018), "A Survey on the Low-Dimensional-Model-based Electromagnetic Imaging", Foundations and Trends in Signal Processing: Vol. 12, No. 2, pp. 107–199. DOI: 10.1561/2000000103.
Citations: 16
Synchronization and Localization in Wireless Networks
Pub Date : 2018-03-29 DOI: 10.1561/2000000096
B. Etzlinger, H. Wymeersch
This review addresses the role of synchronization in the radio localization problem and provides a comprehensive overview of recent developments suitable for current and future practical implementations. The material is intended for both theoreticians and practitioners: it is written to be accessible to novices while covering state-of-the-art topics of interest to advanced researchers of localization and synchronization systems. Several widely used radio localization systems, such as GPS and cellular localization, rely on time-of-flight measurements of data-bearing signals to determine inter-radio distances. For such measurements to be meaningful, accurate synchronization is required. While existing systems use a highly synchronous infrastructure (GPS, where satellites are equipped with atomic clocks, or cellular localization, where base stations are GPS-synchronized), most other wireless networks do not have a sufficiently accurate common notion of time across their nodes. Synchronization, at either the link or the network level, thus plays a principal role in localization systems. This role is expected to become more important in view of recent trends in high-precision and distributed localization, as well as future communication standards, such as 5G indoor localization where access points cannot be externally synchronized. Since synchronization is generally treated separately from localization, there is a need to harmonize these two fundamental problems, especially in the decentralized network context. In this monograph, we revisit the role of synchronization in radio localization and provide an exposition of its relation to the general network localization problem. After an introduction of basic concepts, models, and network inference methods, we contrast two-step approaches with single-step (simultaneous) synchronization and localization, and we discuss these approaches in terms of their methodology and fundamental limitations. Our focus is on techniques that consider practically relevant clock, delay, and measurement models, in order to guide the reader from physical observations to statistical estimation techniques. The presented methods apply to networks with asynchronous localization infrastructure and/or to cooperative ad-hoc networks.
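Why time-of-flight ranging needs synchronization can be sketched in a few lines: a one-way measurement between unsynchronized nodes absorbs the unknown clock offset, while the classic two-way (round-trip) combination cancels it. All numbers below are illustrative assumptions:

```python
# Two-way ranging: the round-trip combination of timestamps cancels a
# constant clock offset between two unsynchronized nodes. All values
# (distance, offset, reply delay) are illustrative assumptions.
C = 299_792_458.0            # speed of light in m/s
true_distance = 30.0         # metres
tof = true_distance / C      # one-way time of flight

offset = 1.7e-6              # node B's clock runs 1.7 us ahead of node A
reply_delay = 200e-6         # B's turnaround time before replying

# t1, t4 are read on A's clock; t2, t3 on B's clock.
t1 = 0.0
t2 = t1 + tof + offset       # B receives A's packet
t3 = t2 + reply_delay        # B transmits its reply
t4 = t3 + tof - offset       # A receives the reply

d_one_way = C * (t2 - t1)                          # biased by the offset
d_two_way = C * ((t4 - t1) - (t3 - t2)) / 2.0      # offset cancels

print(d_one_way, d_two_way)  # one-way is off by hundreds of metres
```

Real clocks also drift (skew), which is why the monograph treats joint synchronization and localization rather than this idealized constant-offset model.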
Citations: 23
Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency
Pub Date : 2018-01-03 DOI: 10.1561/2000000093
Emil Björnson, J. Hoydis, L. Sanguinetti
Massive multiple-input multiple-output (MIMO) is one of the most promising technologies for the next generation of wireless communication networks because it has the potential to provide game-changing improvements in spectral efficiency (SE) and energy efficiency (EE). This monograph summarizes many years of research insights in a clear and self-contained way and provides the reader with the necessary knowledge and mathematical tools to carry out independent research in this area. Starting from a rigorous definition of Massive MIMO, the monograph covers the important aspects of channel estimation, SE, EE, hardware efficiency (HE), and various practical deployment considerations. From the beginning, a very general, yet tractable, canonical system model with spatial channel correlation is introduced. This model is used to realistically assess the SE and EE, and is later extended to also include the impact of hardware impairments. Owing to this rigorous modeling approach, a lot of classic "wisdom" about Massive MIMO, based on too-simplistic system models, is shown to be questionable.
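As a minimal sketch of the spectral-efficiency gains at stake (assuming a single user, i.i.d. Rayleigh fading, and maximum-ratio combining; the monograph's canonical model with spatial correlation, interference, and hardware impairments is far more general):

```python
import numpy as np

# Average uplink spectral efficiency (SE) of one user served by an
# M-antenna base station with maximum-ratio (MR) combining under
# i.i.d. Rayleigh fading. The per-antenna SNR is an assumed operating
# point; interference and channel-estimation errors are ignored.
rng = np.random.default_rng(1)
snr0 = 1.0      # per-antenna SNR (0 dB), illustrative
trials = 2000

def avg_se(M):
    total = 0.0
    for _ in range(trials):
        # Complex Gaussian channel, unit average gain per antenna
        h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        total += np.log2(1 + snr0 * np.linalg.norm(h) ** 2)  # MR-combining SNR
    return total / trials

for M in (1, 10, 100):
    print(M, avg_se(M))   # SE keeps growing as antennas are added
```

The array gain scales the effective SNR roughly with M, so the SE grows without adding bandwidth or transmit power, which is the core promise summarized above.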
Citations: 1218
Computational Visual Attention Models
Pub Date : 2017-06-27 DOI: 10.1561/2000000055
Milind S. Gide, Lina Karam
The human visual system (HVS) has evolved to have the ability to selectively focus on the most relevant parts of a visual scene. This mechanism, referred to as visual attention (VA), has been the focus of several neurological and psychological studies in the past few decades. These studies have inspired several computational VA models, which have been successfully applied to problems in computer vision and robotics. In this paper we provide a comprehensive survey of the state-of-the-art in computational VA modeling, with a special focus on the latest trends. We review several models published since 2012. We also discuss theoretical advantages and disadvantages of each approach. In addition, we describe existing methodologies for evaluating computational models through the use of eye-tracking data, along with the VA performance metrics used. We also discuss shortcomings in existing approaches and describe ways to overcome them. A recent subjective evaluation for benchmarking existing VA metrics is also presented, and open problems in VA are discussed.
M. S. Gide and L. J. Karam, Computational Visual Attention Models. Foundations and Trends® in Signal Processing, vol. 10, no. 4, pp. 347–427, 2016. DOI: 10.1561/2000000055.
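One widely used eye-tracking-based VA performance metric of the kind surveyed here is the Normalized Scanpath Saliency (NSS): the model's saliency map is z-scored and averaged at the recorded human fixation points. A minimal sketch (the toy map and fixation points are assumptions):

```python
import numpy as np

# Normalized Scanpath Saliency (NSS): z-score the model's saliency map,
# then average it at the recorded human fixation locations.
def nss(saliency, fixations):
    """saliency: 2-D array; fixations: iterable of (row, col) points."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    return float(np.mean([s[r, c] for r, c in fixations]))

sal = np.zeros((32, 32))
sal[10:14, 10:14] = 1.0                  # one salient blob
print(nss(sal, [(11, 11), (12, 12)]))    # positive: fixations hit the blob
print(nss(sal, [(0, 0), (30, 30)]))      # negative: fixations miss it
```

A high NSS means the model's predicted saliency is concentrated where humans actually looked, which is the basic comparison the surveyed benchmarks perform.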
Citations: 6
Video Coding: Part II of Fundamentals of Source and Video Coding
Pub Date : 2016-12-14 DOI: 10.1561/2000000078
T. Wiegand, H. Schwarz
Video Coding is the second part of the two-part monograph Fundamentals of Source and Video Coding by Wiegand and Schwarz. This part describes the application of the techniques introduced in the first part to video coding. In doing so, it provides a description of the fundamental concepts of video coding and, in particular, of the signal processing in video encoders and decoders.
Citations: 22
Sparse Sensing for Statistical Inference
Pub Date : 2016-12-14 DOI: 10.1561/2000000069
S. P. Chepuri, G. Leus
In today's society, we are flooded with massive volumes of data, on the order of a billion gigabytes daily, from pervasive sensors. It is becoming increasingly challenging to sense, store, transport, or process (i.e., for inference) the acquired data. To alleviate these problems, there is an urgent need to significantly reduce the sensing cost (i.e., the number of expensive sensors) as well as the related memory and bandwidth requirements by developing unconventional sensing mechanisms that extract as much information as possible while collecting fewer data. The aim of this monograph is therefore to develop theory and algorithms for smart data reduction. We develop a data reduction tool called sparse sensing, which consists of a deterministic and structured sensing function (guided by a sparse vector) that is optimally designed to achieve a desired inference performance with a reduced number of data samples. We develop sparse sensing mechanisms, convex programs, and greedy algorithms to efficiently design sparse sensing functions, where we assume that the data is not yet available and the model information is perfectly known. Sparse sensing offers a number of advantages over compressed sensing (a state-of-the-art data reduction method for sparse signal recovery). One of the major differences is that in sparse sensing the underlying signals need not be sparse, which allows for general signal processing tasks (not just sparse signal recovery) under the proposed sparse sensing framework. Specifically, we focus on fundamental statistical inference tasks, like estimation, filtering, and detection. In essence, we present topics that transform classical (e.g., random or uniform) sensing methods into low-cost data acquisition mechanisms tailored for specific inference tasks. The developed framework can be applied to sensor selection, sensor placement, or sensor scheduling, for example.
S. P. Chepuri and G. Leus, Sparse Sensing for Statistical Inference. Foundations and Trends® in Signal Processing, vol. 9, no. 3–4, pp. 233–386, 2015. DOI: 10.1561/2000000069. Full text available at: http://dx.doi.org/10.1561/2000000069
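A greedy design of the kind mentioned above selects K of M candidate sensors by maximizing a scalar surrogate of estimation performance, for example the log-determinant of the Fisher information matrix in a linear Gaussian model. The sketch below is illustrative, not the authors' exact algorithm, and the candidate regressors are random assumptions:

```python
import numpy as np

# Greedy sparse sensing: choose K of M candidate sensors for the linear
# Gaussian model y_m = a_m^T x + noise by maximizing the log-determinant
# of the Fisher information matrix (a common performance surrogate).
rng = np.random.default_rng(2)
M, n, K = 30, 3, 5
A = rng.standard_normal((M, n))          # candidate regressors a_m

def logdet_fim(rows):
    # Small diagonal loading keeps the matrix invertible early on.
    F = 1e-6 * np.eye(n) + sum(np.outer(A[m], A[m]) for m in rows)
    return np.linalg.slogdet(F)[1]

selected = []
for _ in range(K):
    best = max((m for m in range(M) if m not in selected),
               key=lambda m: logdet_fim(selected + [m]))
    selected.append(best)

print(sorted(selected))                  # indices of the K chosen sensors
```

The log-det objective is monotone in the selected set, which is what makes the greedy pass a natural and cheap heuristic compared with searching all M-choose-K subsets.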
Citations: 27
A Signal Processing Perspective of Financial Engineering
Pub Date : 2016-08-09 DOI: 10.1561/2000000072
Yiyong Feng, D. Palomar
Despite the different nature of financial engineering and electrical engineering, both areas are intimately connected on a mathematical level. The foundations of financial engineering lie in the statistical analysis of numerical time series and the modeling of the behavior of the financial markets in order to perform predictions and systematically optimize investment strategies. Similarly, the foundations of electrical engineering, for instance in wireless communication systems, lie in statistical signal processing and the modeling of communication channels in order to perform predictions and systematically optimize transmission strategies. Both foundations are the same in disguise. It is often the case in science that the same or very similar methodologies are developed and applied independently in different areas. A Signal Processing Perspective of Financial Engineering treats investment in financial assets as a signal processing and optimization problem. It explores such connections and capitalizes on the existing mathematical tools developed in wireless communications and signal processing to solve real-life problems arising in the financial markets in an unprecedented way. A Signal Processing Perspective of Financial Engineering provides straightforward and systematic access to financial engineering for researchers in signal processing and communications, so that they can understand problems in financial engineering more easily and may even apply signal processing techniques to handle some financial problems.
Citations: 44
Deep Learning in Object Recognition, Detection, and Segmentation
Pub Date : 2016-03-01 DOI: 10.1561/2000000071
Xiaogang Wang
As a major breakthrough in artificial intelligence, deep learning has achieved impressive success in solving grand challenges in many fields, including speech recognition, natural language processing, computer vision, image and video processing, and multimedia. This article provides a historical overview of deep learning and focuses on its applications in object recognition, detection, and segmentation, which are key challenges of computer vision with numerous applications to images and videos. The research topics discussed under object recognition include image classification on ImageNet, face recognition, and video classification. The detection part covers general object detection on ImageNet, pedestrian detection, face landmark detection (face alignment), and human landmark detection (pose estimation). On the segmentation side, the article discusses the most recent progress on scene labeling, semantic segmentation, face parsing, human parsing, and saliency detection. Object recognition is treated as whole-image classification, while detection and segmentation are pixelwise classification tasks; their fundamental differences are discussed in this article. Fully convolutional neural networks and highly efficient forward and backward propagation algorithms specially designed for pixelwise classification tasks are introduced. The covered application domains are also much diversified: human and face images have regular structures, while general object and scene images show far more complex variations in geometric structure and layout, and videos add the temporal dimension, so they need to be processed with different deep models. All the selected domain applications have received tremendous attention in the computer vision and multimedia communities. Through concrete examples of these applications, we explain the key points that make deep learning outperform conventional computer vision systems:
1. Unlike traditional pattern recognition systems, which rely heavily on manually designed features, deep learning automatically learns hierarchical feature representations from massive training data and disentangles hidden factors of the input data through multi-level nonlinear mappings.
2. Unlike existing pattern recognition systems, which design or train their key components sequentially, deep learning can jointly optimize all the components and create synergy through close interactions among them.
3. While most machine learning models can be approximated by neural networks with shallow structures, for some tasks the expressive power of deep models increases exponentially as their architectures go deeper. Deep models are especially good at learning global contextual feature representations with their deep structures.
4. Benefiting from the large learning capacity of deep models, some classical computer vision challenges can be recast as high-dimensional data transformation problems and solved from new perspectives.
Finally, some open questions and future work on deep learning in object recognition, detection, and segmentation are discussed.
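The distinction drawn above, whole-image classification versus pixelwise classification, can be made concrete with a toy sketch: a fully convolutional network emits a per-class score map over the image grid, and the predicted segmentation is simply the per-pixel argmax. The helper below and its score tensor are hypothetical illustrations, not code from the article:

```python
import numpy as np

def scores_to_label_map(scores):
    """Pixelwise classification: (num_classes, H, W) scores -> (H, W) labels."""
    return np.argmax(scores, axis=0)

# Toy score map for 3 classes over a 2x2 image.
scores = np.zeros((3, 2, 2))
scores[1, 0, :] = 5.0   # class 1 wins on the top row
scores[2, 1, :] = 7.0   # class 2 wins on the bottom row
labels = scores_to_label_map(scores)
# labels == [[1, 1], [2, 2]]: every pixel gets its own class decision,
# unlike whole-image classification, which emits a single label per image.
```

In a real fully convolutional network the score map is produced by convolutional layers alone, so one forward pass scores all pixels jointly rather than running a classifier per patch.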
Citations: 53
Structured Robust Covariance Estimation
Pub Date : 2015-12-04 DOI: 10.1561/2000000053
A. Wiesel, Teng Zhang
We consider robust covariance estimation with an emphasis on Tyler’s M-estimator. This method provides accurate inference of an unknown covariance in non-standard settings, including heavy-tailed distributions and outlier-contaminated scenarios. We begin with a survey of the estimator and its various derivations in the classical unconstrained setting. The latter rely on the theory of g-convex analysis, which we briefly review. Building on this background, we enhance robust covariance estimation via g-convex regularization and allow accurate inference using a smaller number of samples. We consider shrinkage, diagonal loading, and prior knowledge in the form of symmetry and Kronecker structures. We introduce these concepts to the world of robust covariance estimation and demonstrate how to exploit them in a computationally and statistically efficient manner. A. Wiesel and T. Zhang. Structured Robust Covariance Estimation. Foundations and Trends® in Signal Processing, vol. 8, no. 3, pp. 127–216, 2014. DOI: 10.1561/2000000053. Full text available at: http://dx.doi.org/10.1561/2000000053
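Tyler's M-estimator, mentioned in the abstract above, is usually computed by a simple fixed-point iteration: each step reweights the sample scatter by inverse Mahalanobis-type distances and renormalizes the scale. A minimal NumPy sketch, assuming zero-mean samples and fixing the scale ambiguity with a trace-p normalization (the function name, tolerance, and iteration cap are illustrative choices, not taken from the monograph):

```python
import numpy as np

def tyler_m_estimator(X, max_iter=100, tol=1e-6):
    """Fixed-point iteration for Tyler's M-estimator of scatter.

    X: (n, p) array of n zero-mean samples in R^p.
    Returns a (p, p) scatter matrix normalized to have trace p.
    """
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(max_iter):
        inv = np.linalg.inv(sigma)
        # Per-sample weights: d_i = x_i^T Sigma^{-1} x_i
        d = np.einsum('ij,jk,ik->i', X, inv, X)
        # Update: Sigma <- (p/n) * sum_i x_i x_i^T / d_i
        new = (p / n) * (X.T * (1.0 / d)) @ X
        new *= p / np.trace(new)  # fix the scale ambiguity
        if np.linalg.norm(new - sigma, 'fro') < tol:
            sigma = new
            break
        sigma = new
    return sigma
```

The plain iteration typically converges when n > p and the samples are in general position; the regularized variants surveyed in the monograph (shrinkage, diagonal loading) modify each update, for example by mixing in a scaled identity, to allow inference from fewer samples.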
Citations: 43