
IEEE Journal on Selected Areas in Information Theory: Latest Publications

Machine Learning-Aided Efficient Decoding of Reed–Muller Subcodes
Pub Date : 2023-07-25 DOI: 10.1109/JSAIT.2023.3298362
Mohammad Vahid Jamali;Xiyang Liu;Ashok Vardhan Makkuva;Hessam Mahdavifar;Sewoong Oh;Pramod Viswanath
Reed–Muller (RM) codes achieve the capacity of general binary-input memoryless symmetric channels and are conjectured to perform comparably to random codes in terms of scaling laws. However, such results are established assuming maximum-likelihood decoders for general code parameters. Also, RM codes only admit limited sets of rates. Efficient decoders such as the successive cancellation list (SCL) decoder and the recently introduced recursive projection-aggregation (RPA) decoders are available for RM codes at finite lengths. In this paper, we focus on subcodes of RM codes with flexible rates. We first extend the RPA decoding algorithm to RM subcodes. To lower the complexity of our decoding algorithm, referred to as subRPA, we investigate different approaches to prune the projections. Next, we derive a soft-decision-based version of our algorithm, called soft-subRPA, that not only improves upon the performance of subRPA but also enables a differentiable decoding algorithm. Building upon the soft-subRPA algorithm, we then provide a framework for training a machine learning (ML) model to search for good sets of projections that minimize the decoding error rate. Training our ML model achieves performance very close to that of full-projection decoding with a significantly smaller number of projections. We also show that the choice of projections in decoding RM subcodes matters significantly, and that our ML-aided projection pruning scheme is able to find a good selection, i.e., one with negligible performance degradation compared to the full-projection case, given a reasonable number of projections.
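The core primitive behind subRPA is the projection of a received word onto the cosets of a one-dimensional subspace; the pruned (and ML-selected) subset of such projections determines complexity and performance. Below is a minimal Python sketch of that hard-decision projection step only; the recursive decoding, aggregation, and projection-selection stages of (soft-)subRPA are omitted, and the example word and subspace are arbitrary assumptions.

```python
import numpy as np

def project_hard(y, b):
    """Project a hard-decision word y (indexed by z in F_2^m, m = log2(len(y)))
    onto the cosets of the subspace {0, b}: y_proj[{z, z^b}] = y[z] XOR y[z^b]."""
    m = int(np.log2(len(y)))
    assert 0 < b < 2 ** m
    proj, seen = [], set()
    for z in range(2 ** m):
        if z in seen:
            continue
        seen.update({z, z ^ b})            # the coset {z, z ^ b} is handled once
        proj.append(int(y[z]) ^ int(y[z ^ b]))
    return np.array(proj)                  # length 2**(m-1); a word of a smaller RM code

# toy usage: m = 3, received word of length 8, projection along b = 0b101
y = np.array([0, 1, 1, 0, 1, 0, 0, 1])
print(project_hard(y, 0b101))
```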
Citations: 1
Efficient Algorithms for the Bee-Identification Problem
Pub Date : 2023-07-18 DOI: 10.1109/JSAIT.2023.3296077
Han Mao Kiah;Alexander Vardy;Hanwen Yao
The bee-identification problem, formally defined by Tandon, Tan, and Varshney (2019), requires the receiver to identify “bees” using a set of unordered noisy measurements. In that previous work, Tandon, Tan, and Varshney studied error exponents and showed that decoding the measurements jointly results in a significantly larger error exponent. In this work, we study algorithms related to this joint decoder. First, we demonstrate how to perform joint decoding efficiently. By reducing to the problems of finding perfect matchings and minimum-cost matchings, we obtain joint decoders that run in time quadratic and cubic in the number of “bees” for the binary erasure channel (BEC) and the binary symmetric channel (BSC), respectively. Next, by studying the matching algorithms in the context of channel coding, we further reduce the running times by using classical tools like peeling decoders and list-decoders. In particular, we show that our identification algorithms, when used with Reed–Muller codes, terminate in almost linear and quadratic time for the BEC and BSC, respectively. Finally, for explicit codebooks, we study when these joint decoders fail to identify the “bees” correctly. Specifically, we provide practical methods of estimating the probability of erroneous identification for given codebooks.
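As a concrete illustration of the reduction to minimum-cost matching for the BSC, the sketch below matches unordered noisy measurements to a toy random codebook by solving an assignment problem on Hamming-distance costs with SciPy's Hungarian-algorithm solver (cubic in the number of "bees"). The codebook, noise level, and sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
k, n, p = 8, 15, 0.05                                   # "bees", code length, BSC crossover prob.
codebook = rng.integers(0, 2, size=(k, n))              # toy random codebook, one codeword per bee
perm = rng.permutation(k)                               # unknown order of arrival
noisy = (codebook[perm] + (rng.random((k, n)) < p)) % 2 # unordered noisy measurements

# cost[i, j] = Hamming distance between measurement i and codeword j;
# for a BSC this is an affine transform of the negative log-likelihood.
cost = (noisy[:, None, :] != codebook[None, :, :]).sum(axis=2)
rows, cols = linear_sum_assignment(cost)                # minimum-cost perfect matching, O(k^3)
print("recovered permutation correct:", np.array_equal(cols, perm))
```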
Citations: 3
Active Privacy-Utility Trade-Off Against Inference in Time-Series Data Sharing
Pub Date : 2023-06-28 DOI: 10.1109/JSAIT.2023.3287929
Ecenaz Erdemir;Pier Luigi Dragotti;Deniz Gündüz
Internet of Things devices have become highly popular thanks to the services they offer. However, they also raise privacy concerns since they share fine-grained time-series user data with untrusted third parties. We model the user’s personal information as the secret variable, to be kept private from an honest-but-curious service provider, and the useful variable, to be disclosed for utility. We consider an active learning framework, where one out of a finite set of measurement mechanisms is chosen at each time step, each revealing some information about the underlying secret and useful variables, albeit with different statistics. The measurements are taken such that the correct value of the useful variable can be detected quickly, while the confidence on the secret variable remains below a predefined level. For the privacy measure, we consider both the probability of correctly detecting the secret variable’s value and the mutual information between the secret and the released data. We formulate both problems as partially observable Markov decision processes and solve them numerically by advantage actor-critic deep reinforcement learning. We evaluate the privacy-utility trade-off of the proposed policies on both synthetic and real-world time-series datasets.
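To make the setup concrete, here is a small Python sketch of active measurement selection with a joint belief over a binary useful variable X and a binary secret S. A one-step-lookahead greedy rule stands in for the paper's advantage actor-critic policy, and the mechanisms, likelihoods, and thresholds are invented toy values; it only illustrates the belief update and the utility/privacy stopping logic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mech, n_obs = 3, 2
# likelihoods[m, x, s, o] = P(observation o | X=x, S=s, mechanism m); toy values
likelihoods = np.array([
    [[[0.9, 0.1], [0.9, 0.1]], [[0.2, 0.8], [0.2, 0.8]]],   # mostly reveals X
    [[[0.7, 0.3], [0.4, 0.6]], [[0.6, 0.4], [0.3, 0.7]]],   # mixed
    [[[0.8, 0.2], [0.3, 0.7]], [[0.8, 0.2], [0.3, 0.7]]],   # mostly reveals S
])

x_true, s_true = 1, 0
belief = np.full((2, 2), 0.25)                 # joint belief over (x, s)
privacy_level, utility_level = 0.8, 0.95

for step in range(20):
    # one-step lookahead: expected confidence on X; skip mechanisms whose expected
    # confidence on S would exceed the privacy level (toy heuristic, not the paper's policy)
    scores = []
    for m in range(n_mech):
        score, leak = 0.0, 0.0
        for o in range(n_obs):
            p_o = np.sum(belief * likelihoods[m, :, :, o])
            post = belief * likelihoods[m, :, :, o] / max(p_o, 1e-12)
            score += p_o * post.sum(axis=1).max()   # confidence on X
            leak += p_o * post.sum(axis=0).max()    # confidence on S
        scores.append(score if leak < privacy_level else -np.inf)
    m = int(np.argmax(scores))
    o = rng.choice(n_obs, p=likelihoods[m, x_true, s_true])
    belief = belief * likelihoods[m, :, :, o]
    belief /= belief.sum()
    if belief.sum(axis=1).max() >= utility_level:   # confident enough about X: stop
        break

print("declared X =", int(belief.sum(axis=1).argmax()),
      "posterior on S =", belief.sum(axis=0).round(3))
```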
Citations: 4
SPRT-Based Efficient Best Arm Identification in Stochastic Bandits
Pub Date : 2023-06-23 DOI: 10.1109/JSAIT.2023.3288988
Arpan Mukherjee;Ali Tajer
This paper investigates the best arm identification (BAI) problem in stochastic multi-armed bandits in the fixed confidence setting. The general class of the exponential family of bandits is considered. The existing algorithms for the exponential family of bandits face computational challenges. To mitigate these challenges, the BAI problem is viewed and analyzed as a sequential composite hypothesis testing task, and a framework is proposed that adopts the likelihood ratio-based tests known to be effective for sequential testing. Based on this test statistic, a BAI algorithm is designed that leverages the canonical sequential probability ratio tests for arm selection and is amenable to tractable analysis for the exponential family of bandits. This algorithm has two key features: (1) its sample complexity is asymptotically optimal, and (2) it is guaranteed to be $\delta$-PAC. Existing efficient approaches focus on the Gaussian setting and require Thompson sampling for the arm deemed the best and the challenger arm. Additionally, this paper analytically quantifies the computational expense of identifying the challenger in an existing approach. Finally, numerical experiments are provided to support the analysis.
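For intuition, the sketch below implements a GLR/SPRT-flavored stopping rule for the special case of Gaussian arms with unit variance, alternating pulls between the empirical best arm and its closest challenger. It is not the paper's exponential-family algorithm; the stopping threshold is a common heuristic choice, and the arm means and confidence level are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.5, 0.45, 0.3, 0.1])          # unknown to the learner
K, delta = len(means), 0.05
counts, sums = np.zeros(K), np.zeros(K)

def pull(arm):
    return means[arm] + rng.standard_normal()    # unit-variance Gaussian reward

for arm in range(K):                             # one initial pull per arm
    sums[arm] += pull(arm)
    counts[arm] += 1

while True:
    mu = sums / counts
    best = int(np.argmax(mu))
    others = [j for j in range(K) if j != best]
    # GLR statistic between the empirical best arm and each challenger
    glr = np.array([counts[best] * counts[j] / (counts[best] + counts[j])
                    * (mu[best] - mu[j]) ** 2 / 2 for j in others])
    challenger = others[int(np.argmin(glr))]
    threshold = np.log((1 + np.log(counts.sum())) / delta)   # heuristic threshold
    if glr.min() > threshold:
        break
    for arm in (best, challenger):               # alternate best / closest challenger
        sums[arm] += pull(arm)
        counts[arm] += 1

print("identified arm", best, "after", int(counts.sum()), "samples")
```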
Citations: 2
Dual-Blind Deconvolution for Overlaid Radar-Communications Systems
Pub Date : 2023-06-22 DOI: 10.1109/JSAIT.2023.3287823
Edwin Vargas;Kumar Vijay Mishra;Roman Jacome;Brian M. Sadler;Henry Arguello
The increasingly crowded spectrum has spurred the design of joint radar-communications systems that share hardware resources and efficiently use the radio frequency spectrum. We study a general spectral coexistence scenario, wherein the channels and transmit signals of both radar and communications systems are unknown at the receiver. In this dual-blind deconvolution (DBD) problem, a common receiver admits a multi-carrier wireless communications signal that is overlaid with the radar signal reflected off multiple targets. The communications and radar channels are represented by continuous-valued range-time and Doppler velocities of multiple transmission paths and multiple targets. We exploit the sparsity of both channels to solve the highly ill-posed DBD problem by casting it into a sum of multivariate atomic norms (SoMAN) minimization. We devise a semidefinite program to estimate the unknown target and communications parameters using the theories of positive-hyperoctant trigonometric polynomials (PhTP). Our theoretical analyses show that the minimum number of samples required for near-perfect recovery depends on the logarithm of the maximum of the number of radar targets and the number of communications paths, rather than of their sum. We show that our SoMAN method and PhTP formulations are also applicable to more general scenarios such as unsynchronized transmission, the presence of noise, and multiple emitters. Numerical experiments demonstrate great performance enhancements during parameter recovery under different scenarios.
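To fix ideas about the observation model (the SoMAN semidefinite program itself is not reproduced here), the following sketch synthesizes one toy dual-blind observation: frequency-domain samples of a radar return from a few continuous-valued delays, overlaid with a multicarrier communication signal passed through a sparse multipath channel. All delays, gains, and sizes are illustrative assumptions, and Doppler is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                           # number of frequency-domain samples
f = np.arange(N)

# radar: two targets with continuous-valued (normalized) delays
delays_r, gains_r = np.array([0.13, 0.47]), np.array([1.0, 0.6])
radar = sum(g * np.exp(-2j * np.pi * f * d) for g, d in zip(gains_r, delays_r))

# communications: random QPSK symbols on N subcarriers through a 3-path channel
syms = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
delays_c, gains_c = np.array([0.05, 0.31, 0.62]), np.array([0.9, 0.5, 0.3])
channel = sum(g * np.exp(-2j * np.pi * f * d) for g, d in zip(gains_c, delays_c))
comm = channel * syms

# the receiver sees only the superposition; DBD must recover both sets of
# (gains, delays) and the symbols from y alone
y = radar + comm + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
print(np.round(y[:4], 3))
```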
Citations: 5
An Information-Theoretic Approach to Collaborative Integrated Sensing and Communication for Two-Transmitter Systems
Pub Date : 2023-06-16 DOI: 10.1109/JSAIT.2023.3286932
Mehrasa Ahmadipour;Michèle Wigger
This paper considers information-theoretic models for integrated sensing and communication (ISAC) over multi-access channels (MAC) and device-to-device (D2D) communication. The models are general and include as special cases scenarios with and without perfect or imperfect state-information at the MAC receiver as well as causal state-information at the D2D terminals. For both setups, we propose collaborative sensing ISAC schemes where terminals not only convey data to the other terminals but also state-information that they extract from their previous observations. This state-information can be exploited at the other terminals to improve their sensing performances. Indeed, as we show through examples, our schemes improve over previous non-collaborative schemes in terms of their achievable rate-distortion tradeoffs. For D2D we propose two schemes, one where compression of state information is separated from channel coding and one where it is integrated via a hybrid coding approach.
Citations: 1
Continuous-Time Modeling and Analysis of Particle Beam Metrology
Pub Date : 2023-06-09 DOI: 10.1109/JSAIT.2023.3283911
Akshay Agarwal;Minxu Peng;Vivek K Goyal
Particle beam microscopy (PBM) performs nanoscale imaging by pixelwise capture of scalar values representing noisy measurements of the response from secondary electrons (SEs) integrated over a dwell time. Extended to metrology, goals include estimating SE yield at each pixel and detecting differences in SE yield across pixels; obstacles include shot noise in the particle source as well as lack of knowledge of and variability in the instrument response to single SEs. A recently introduced time-resolved measurement paradigm promises mitigation of source shot noise, but its analysis and development have been largely limited to estimation problems under an idealization in which SE bursts are directly and perfectly counted. Here, analyses are extended to error exponents in feature detection problems and to degraded measurements that are representative of actual instrument behavior for estimation problems. For estimation from idealized SE counts, insights on existing estimators and a superior estimator are also provided. For estimation in a realistic PBM imaging scenario, extensions to the idealized model are introduced, methods for model parameter extraction are discussed, and large improvements from time-resolved data are presented.
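A small Monte Carlo, under the idealized model in which SE bursts per incident ion are counted directly, illustrates why ion-counting estimation mitigates source shot noise: both estimators below are unbiased for the SE yield, but normalizing by the actual ion count removes the Poisson variability of the dose. The yield, dose, and trial counts are assumed toy values, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, lam = 3.0, 20.0                    # SE yield per ion, mean ions per dwell time
n_pixels, n_trials = 1000, 200

conv, oracle = [], []
for _ in range(n_trials):
    ions = rng.poisson(lam, size=n_pixels)        # source shot noise
    ses = rng.poisson(eta * ions)                  # total SEs per pixel (Neyman Type A)
    conv.append(ses.sum() / (lam * n_pixels))      # conventional: normalizes by mean dose only
    oracle.append(ses.sum() / max(ions.sum(), 1))  # idealized: normalizes by counted ions

print("conventional  mean %.3f  std %.4f" % (np.mean(conv), np.std(conv)))
print("ion-counting  mean %.3f  std %.4f" % (np.mean(oracle), np.std(oracle)))
```

The ion-counting estimator's standard deviation should be smaller by roughly a factor of sqrt(1 + eta), reflecting the removal of source shot noise in this idealization.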
Citations: 2
Sketching Low-Rank Matrices With a Shared Column Space by Convex Programming
Pub Date : 2023-06-07 DOI: 10.1109/JSAIT.2023.3283973
Rakshith S. Srinivasa;Seonho Kim;Kiryung Lee
In many practical applications including remote sensing, multi-task learning, and multi-spectrum imaging, data are described as a set of matrices sharing a common column space. We consider the joint estimation of such matrices from their noisy linear measurements. We study a convex estimator regularized by a pair of matrix norms. The measurement model corresponds to block-wise sensing and the reconstruction is possible only when the total energy is well distributed over blocks. The first norm, which is the maximum-block-Frobenius norm, favors such a solution. This condition is analogous to the notion of low-spikiness in matrix completion or column-wise sensing. The second norm, which is a tensor norm on a pair of suitable Banach spaces, induces low-rankness in the solution together with the first norm. We demonstrate that the joint estimation provides a significant gain over the individual recovery of each matrix when the number of matrices sharing a column space and the ambient dimension of the shared column space are large relative to the number of columns in each matrix. The convex estimator is cast as a semidefinite program and an efficient ADMM algorithm is derived. The empirical behavior of the convex estimator is illustrated using Monte Carlo simulations and recovery performance is compared to existing methods in the literature.
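The sketch below conveys the flavor of joint estimation with a shared column space, but deliberately simplifies the paper's setup: entrywise-masked observations stand in for block-wise linear measurements, and a single nuclear norm on the column-wise concatenation replaces the paper's pair of norms (maximum-block-Frobenius plus a tensor norm). Dimensions, the sampling rate, and the regularization weight are assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, c, L, r = 30, 6, 5, 2                      # rows, cols per matrix, #matrices, shared rank
U = rng.standard_normal((d, r))
Xs = [U @ rng.standard_normal((r, c)) for _ in range(L)]          # shared column space
masks = [(rng.random((d, c)) < 0.5).astype(float) for _ in range(L)]
Ys = [M * X for M, X in zip(masks, Xs)]                            # observed entries only

X_vars = [cp.Variable((d, c)) for _ in range(L)]
fit = sum(cp.sum_squares(cp.multiply(M, X) - Y) for M, X, Y in zip(masks, X_vars, Ys))
joint_rank = cp.normNuc(cp.hstack(X_vars))    # couples the column spaces of all matrices
prob = cp.Problem(cp.Minimize(fit + 0.5 * joint_rank))
prob.solve()

err = sum(np.linalg.norm(X.value - Xt) for X, Xt in zip(X_vars, Xs))
err /= sum(np.linalg.norm(Xt) for Xt in Xs)
print("relative joint recovery error: %.3f" % err)
```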
Citations: 0
Local Geometry of Nonconvex Spike Deconvolution From Low-Pass Measurements
Pub Date : 2023-03-30 DOI: 10.1109/JSAIT.2023.3262689
Maxime Ferreira Da Costa;Yuejie Chi
Spike deconvolution is the problem of recovering the point sources from their convolution with a known point spread function, which plays a fundamental role in many sensing and imaging applications. In this paper, we investigate the local geometry of recovering the parameters of point sources—including both amplitudes and locations—by minimizing a natural nonconvex least-squares loss function measuring the observation residuals. We propose preconditioned variants of gradient descent (GD), where the search direction is scaled via some carefully designed preconditioning matrices. We begin with a simple fixed preconditioner design, which adjusts the learning rates of the locations at a different scale from those of the amplitudes, and show it achieves a linear rate of convergence—in terms of entrywise errors—when initialized close to the ground truth, as long as the separation between the true spikes is sufficiently large. However, the convergence rate slows down significantly when the dynamic range of the source amplitudes is large. To bridge this issue, we introduce an adaptive preconditioner design, which compensates for the learning rates of different sources in an iteration-varying manner based on the current estimate. The adaptive design provably leads to an accelerated convergence rate that is independent of the dynamic range, highlighting the benefit of adaptive preconditioning in nonconvex spike deconvolution. Numerical experiments are provided to corroborate the theoretical findings.
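As a concrete (and simplified) illustration of the fixed-preconditioner idea, the sketch below runs gradient descent on the nonconvex least-squares loss with the location updates at a much smaller step size than the amplitude updates, starting near the ground truth. A Gaussian kernel stands in for the paper's low-pass point spread function, and the step sizes and spike configuration are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.05
t = np.linspace(0, 1, 200)                                    # sample grid
g = lambda x: np.exp(-x ** 2 / (2 * sigma ** 2))              # stand-in PSF
tau_true, a_true = np.array([0.25, 0.55, 0.8]), np.array([1.0, 0.7, 1.3])
y = sum(a * g(t - tk) for a, tk in zip(a_true, tau_true))     # noiseless observations

a = a_true + 0.2 * rng.standard_normal(3)                     # initialize near the truth
tau = tau_true + 0.02 * rng.standard_normal(3)
lr_a, lr_tau = 1e-3, 1e-5                 # fixed preconditioner: locations use a smaller step

for _ in range(2000):
    model = sum(ak * g(t - tk) for ak, tk in zip(a, tau))
    resid = model - y
    grad_a = np.array([2 * np.sum(resid * g(t - tk)) for tk in tau])
    grad_tau = np.array([2 * ak * np.sum(resid * g(t - tk) * (t - tk)) / sigma ** 2
                         for ak, tk in zip(a, tau)])
    a -= lr_a * grad_a
    tau -= lr_tau * grad_tau

print("amplitude error:", np.abs(a - a_true).max(),
      "location error:", np.abs(tau - tau_true).max())
```

The adaptive preconditioner discussed in the abstract would additionally rescale each location's step using the current amplitude estimate, which is what removes the dependence on the dynamic range.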
Citations: 0
Estimating the Sizes of Binary Error-Correcting Constrained Codes
Pub Date : 2023-03-23 DOI: 10.1109/JSAIT.2023.3279113
V. Arvind Rameshwar;Navin Kashyap
In this paper, we study binary constrained codes that are resilient to bit-flip errors and erasures. In our first approach, we compute the sizes of constrained subcodes of linear codes. Since there exist well-known linear codes that achieve vanishing probabilities of error over the binary symmetric channel (which causes bit-flip errors) and the binary erasure channel, constrained subcodes of such linear codes are also resilient to random bit-flip errors and erasures. We employ a simple identity from the Fourier analysis of Boolean functions, which transforms the problem of counting constrained codewords of linear codes to a question about the structure of the dual code. We illustrate the utility of our method in providing explicit values or efficient algorithms for our counting problem, by showing that the Fourier transform of the indicator function of the constraint is computable, for different constraints. Our second approach is to obtain good upper bounds, using an extension of Delsarte’s linear program (LP), on the largest sizes of constrained codes that can correct a fixed number of combinatorial errors or erasures. We observe that the numerical values of our LP-based upper bounds beat the generalized sphere packing bounds of Fazeli et al. (2015).
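The first approach rests on a Fourier (Poisson-summation) identity: the number of codewords of a linear code C satisfying a constraint equals (1/|C^⊥|) times the sum, over the dual code C^⊥, of the Fourier transform of the constraint's indicator function. The brute-force check below verifies this on the [7,4] Hamming code with the no-two-adjacent-ones constraint; the choice of code and constraint is illustrative, and the exhaustive Fourier computation is only feasible because n is tiny.

```python
import numpy as np
from itertools import product

n = 7
G = np.array([[1,0,0,0, 1,1,0],      # systematic generator matrix of the [7,4] Hamming code
              [0,1,0,0, 1,0,1],
              [0,0,1,0, 0,1,1],
              [0,0,0,1, 1,1,1]])
H = np.array([[1,1,0,1, 1,0,0],      # parity-check matrix; its row space is the dual code
              [1,0,1,1, 0,1,0],
              [0,1,1,1, 0,0,1]])

def span(M):
    return np.array([np.array(bits) @ M % 2 for bits in product([0, 1], repeat=M.shape[0])])

def f(x):                            # constraint indicator: no "11" substring
    return 1.0 if not any(x[i] and x[i + 1] for i in range(len(x) - 1)) else 0.0

C, C_dual = span(G), span(H)
direct = sum(f(c) for c in C)

all_x = np.array(list(product([0, 1], repeat=n)))
f_vals = np.array([f(x) for x in all_x])
def f_hat(y):                        # Fourier transform of f over F_2^n
    return np.sum(f_vals * (-1.0) ** (all_x @ y))

via_dual = sum(f_hat(y) for y in C_dual) / len(C_dual)
print("direct count:", direct, " via dual-code identity:", via_dual)
```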
Citations: 3