Pub Date: 2025-12-23. DOI: 10.1109/TIT.2025.3643223
"IEEE Transactions on Information Theory Information for Authors," IEEE Transactions on Information Theory, vol. 72, no. 1, p. C3. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11313748
Pub Date: 2025-12-23. DOI: 10.1109/TIT.2025.3643249
"TechRxiv: Share Your Preprint Research with the World!," IEEE Transactions on Information Theory, vol. 72, no. 1, p. 810. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11313722
Pub Date: 2025-12-08. DOI: 10.1109/TIT.2025.3641166
Aytijhya Saha;Aaditya Ramdas
We propose an e-value based framework for testing arbitrary composite nulls against composite alternatives, when an $\epsilon$ fraction of the data can be arbitrarily corrupted. Our tests are inherently sequential, being valid at arbitrary data-dependent stopping times, but they are new even for fixed sample sizes, giving type-I error control without any regularity conditions. We first prove that least favourable distribution (LFD) pairs, when they exist, yield optimal e-values for testing arbitrary composite nulls against composite alternatives. Then we show that if an LFD pair exists for some composite null and alternative, then the LFDs of Huber’s $\epsilon$-contamination or total variation (TV) neighborhoods around that specific pair form the optimal LFD pair for the corresponding robustified composite hypotheses. Furthermore, where LFDs do not exist, we develop new robust composite tests for general settings. Our test statistics are a nonnegative supermartingale under the (robust) null, even under a sequentially adaptive (non-i.i.d.) contamination model where the conditional distribution of each observation given the past data lies within an $\epsilon$ TV ball of some distribution in the original composite null. When LFDs exist, our supermartingale grows to infinity exponentially fast under any distribution in the ($\epsilon$ TV-corruption of the) alternative at the optimal rate. When LFDs do not exist, we provide an asymptotic growth rate analysis, showing that as $\epsilon \to 0$, the exponent converges to the corresponding Kullback–Leibler divergence, recovering the classical optimal non-robust rate. Simulations validate the theory and demonstrate reasonable practical performance.
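As a minimal illustration of the e-value machinery the abstract describes, the sketch below tracks a likelihood-ratio wealth process, which is a nonnegative martingale under a simple null. The Bernoulli densities `p0` and `p1` and the naive `eps` shrinkage toward 1 are illustrative assumptions, a crude stand-in for the paper's LFD-based robustification, not the authors' construction.

```python
import math


def likelihood_ratio_e_process(xs, p0, p1, eps=0.0):
    """Wealth process W_t = prod of per-observation likelihood ratios
    q1(x)/q0(x). With eps > 0 each ratio is shrunk toward 1, capping
    the influence of any single (possibly corrupted) observation.
    Under the simple null q0, each shrunk ratio still has mean 1,
    so W_t remains a nonnegative martingale."""
    wealth = 1.0
    path = []
    for x in xs:
        lr = p1(x) / p0(x)
        lr = (1.0 - eps) * lr + eps  # simple shrinkage; not the paper's LFD pair
        wealth *= lr
        path.append(wealth)
    return path


# Toy densities: null Bern(0.5) vs alternative Bern(0.8).
p0 = lambda x: 0.5
p1 = lambda x: 0.8 if x == 1 else 0.2
path = likelihood_ratio_e_process([1] * 8 + [0] * 2, p0, p1)
```

Under data drawn from the alternative, the wealth grows geometrically (here to about 6.87 after ten observations); rejecting when the wealth exceeds $1/\alpha$ gives a level-$\alpha$ test at any stopping time by Ville's inequality.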
"Huber-Robust Likelihood Ratio Tests for Composite Nulls and Alternatives," IEEE Transactions on Information Theory, vol. 72, no. 1, pp. 501–520.
Pub Date: 2025-12-05. DOI: 10.1109/TIT.2025.3637800
Tingting Tong;Sihuang Hu
The error coefficient of a linear code, defined as the number of its minimum weight codewords, is a key performance metric to evaluate codes with a given length, dimension, and minimum distance. In this paper, we propose novel approaches, different from existing methods, to produce three new families of binary optimal linear codes with the smallest possible error coefficients. These codes are known as asymptotic frame error rate (AFER)-optimal codes, achieving the best known performance in the additive white Gaussian noise channel and under maximum-likelihood decoding. In particular, we solve a conjecture originally proposed by Li et al. (2025).
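The error coefficient is concrete enough to compute by brute force for a small code. The sketch below enumerates all codewords of a binary linear code from a generator matrix and reports the minimum distance together with the number of minimum-weight codewords; it is a generic check on the definition, not the paper's construction.

```python
from itertools import product


def weight_spectrum_extremes(G):
    """Minimum distance and error coefficient (number of minimum-weight
    codewords) of the binary linear code generated by the rows of G.
    Brute force over all 2^k messages; fine for small k."""
    k, n = len(G), len(G[0])
    best, count = n + 1, 0
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the all-zero codeword
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        w = sum(cw)
        if w < best:
            best, count = w, 1
        elif w == best:
            count += 1
    return best, count


# [7,4] Hamming code: weight enumerator 1 + 7x^3 + 7x^4 + x^7,
# so d = 3 and the error coefficient is 7.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```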
"Three New Families of Binary AFER-Optimal Linear Codes," IEEE Transactions on Information Theory, vol. 72, no. 1, pp. 374–382.
Pub Date: 2025-12-02. DOI: 10.1109/TIT.2025.3639289
Han Fang;Nan Liu;Wei Kang
We consider the secure coded caching problem proposed by Ravindrakumar et al., where no user can obtain information about files other than the one requested. We first propose three new schemes for 1) the general case with arbitrary N files and K users; 2) cache size $M=1$, $N=2$ files, and arbitrary K users; and 3) worst-case delivery rate $R=1$, arbitrary N files and K users, respectively. Then we derive new converse results for 1) the general case with arbitrary N files and K users; 2) cache size $M=1$ with arbitrary N files and K users; 3) worst-case delivery rate $R=1$ with arbitrary N files and K users; and 4) cache size $M \in \left[1, \frac{K}{K-1}\right]$ with $N=2$ files and arbitrary K users. As a result, we obtain 1) the two exact end-points of the optimal memory-rate tradeoff curve for an arbitrary number of users and files; 2) a segment of the optimal memory-rate tradeoff curve, where $M \in \left[1, \frac{K}{K-1}\right]$, for the case of $N=2$ files and an arbitrary number of users; and 3) a multiplicative-gap result: the proposed achievable schemes achieve a ratio of less than 10 with respect to the cut-set bound.
"Secure Coded Caching: Exact End-Points and Tighter Bounds," IEEE Transactions on Information Theory, vol. 72, no. 1, pp. 636–663.
Pub Date: 2025-12-01. DOI: 10.1109/TIT.2025.3639191
Yuefeng Han;Likai Chen;Wei Biao Wu
High-dimensional vector autoregressive (VAR) models have numerous applications in fields such as econometrics, biology, and climatology. While prior research has mainly focused on linear VAR models, these approaches can be restrictive in practice. To address this, we introduce a high-dimensional non-parametric sparse additive model, providing a more flexible framework. Our method employs basis expansions to construct high-dimensional nonlinear VAR models. We derive convergence rates and model selection consistency for least squares estimators, accounting for dependence measures of the processes, error moment conditions, sparsity, and basis expansions. Our theory significantly extends prior work on linear VAR models by incorporating both non-Gaussianity and non-linearity. As a key contribution, we derive sharp Bernstein-type inequalities for tail probabilities in both non-sub-Gaussian linear and nonlinear VAR processes, which match the classical Bernstein inequality for independent random variables. Additionally, we present numerical experiments that support our theoretical findings and demonstrate the advantages of the nonlinear VAR model on a gene expression time series dataset.
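A toy simulation can make the model class concrete: a sparse additive nonlinear VAR(1) in which each coordinate is driven by a few bounded component functions of the previous state plus noise. The particular link functions, coefficients, and noise scale below are illustrative assumptions, not the paper's estimator or data-generating process.

```python
import math
import random


def simulate_nonlinear_var(T, d=2, seed=0):
    """Simulate X_t[i] = sum_j f_ij(X_{t-1}[j]) + noise, where only a
    few component functions f_ij are nonzero (sparsity) and each is a
    bounded contraction, which keeps the chain stable."""
    rng = random.Random(seed)
    # Sparse additive links: only 2 of the d*d = 4 possible links are active.
    f = {(0, 0): lambda u: 0.5 * math.tanh(u),
         (1, 0): lambda u: 0.4 * math.sin(u)}
    x = [0.0] * d
    path = [x]
    for _ in range(T):
        x = [sum(g(x[j]) for (i2, j), g in f.items() if i2 == i)
             + 0.1 * rng.gauss(0.0, 1.0)
             for i in range(d)]
        path.append(x)
    return path
```

Fitting such a model along the lines the abstract sketches would expand each lagged coordinate in a basis (e.g., splines) and run a sparsity-penalized least squares over the basis coefficients; the simulation above only illustrates the data-generating side.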
"Estimation of High-Dimensional Nonlinear Vector Autoregressive Models," IEEE Transactions on Information Theory, vol. 72, no. 1, pp. 521–541.
Pub Date: 2025-11-27. DOI: 10.1109/TIT.2025.3638148
Guanghui Song;Meiru Gao;Ying Li;Bin Dai;Kui Cai;Lin Zhou
An analytical framework integrating performance characterization and coding theory is proposed to mitigate sneak path (SP) interference in resistive random-access memory (ReRAM) crossbar arrays. The core innovation is the mathematical decomposition of ReRAM’s non-ergodic, data-dependent channel into multiple stationary memoryless subchannels. Through information-theoretic analysis, an approximate finite-length characterization of the theoretical lower bound on the decoding word error probability (WEP) is established. This is achieved by systematically analyzing the SP occurrence rate in constrained array geometries, combined with an evaluation of both mutual information and dispersion across the decomposed channel components. Building upon this decomposition, a systematic code construction methodology is developed using density evolution principles for sparse-graph code design. The designed codes not only exhibit capacity-approaching decoding thresholds but also yield word error rate simulation results close to the derived WEP bound under practical crossbar configurations.
Guanghui Song, Meiru Gao, Ying Li, Bin Dai, Kui Cai, and Lin Zhou, "Performance Analysis and Code Design for Resistive Random-Access Memory Using Channel Decomposition Approach," IEEE Transactions on Information Theory, vol. 72, no. 1, pp. 358–373.
Pub Date: 2025-11-26. DOI: 10.1109/TIT.2025.3637364
Sara Saeidian;Leonhard Grosse;Parastoo Sadeghi;Mikael Skoglund;Tobias J. Oechtering
This paper explores the implications of guaranteeing privacy by imposing a lower bound on the information density between the private and the public data. We introduce a novel and operationally meaningful privacy measure called pointwise maximal cost (PMC) and demonstrate that imposing an upper bound on PMC is equivalent to enforcing a lower bound on the information density. PMC quantifies the information leakage about a secret to adversaries who aim to minimize non-negative cost functions after observing the outcome of a privacy mechanism. When restricted to finite alphabets, PMC can equivalently be defined as the information leakage to adversaries aiming to minimize the probability of incorrectly guessing randomized functions of the secret. We study the properties of PMC and apply it to standard privacy mechanisms to demonstrate its practical relevance. Through a detailed examination, we connect PMC with other privacy measures that impose upper or lower bounds on the information density. These are pointwise maximal leakage (PML), local differential privacy (LDP), and (asymmetric) local information privacy. In particular, we show that a mechanism satisfies LDP if and only if it has both bounded PMC and bounded PML. Overall, our work fills a conceptual and operational gap in the taxonomy of privacy measures, bridges existing disconnects between different frameworks, and offers insights for selecting a suitable notion of privacy in a given application.
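For a finite-alphabet mechanism, the information density at the heart of this framework is directly computable. The sketch below evaluates $i(x;y) = \log\big(P(y\mid x)/P(y)\big)$ over the joint support and returns its extremes; reading the maximum as the quantity bounded by upper-bound measures like PML and the minimum as the quantity PMC constrains from below is an illustrative simplification, not the paper's exact definitions.

```python
import math


def information_density_extremes(p_x, channel):
    """p_x: pmf of the secret X; channel[x][y] = P(Y=y | X=x).
    Returns (min, max) of the information density
    i(x; y) = log( P(y|x) / P(y) ) over pairs with positive probability."""
    ys = set()
    for x in channel:
        ys.update(channel[x])
    # Marginal of the output: P(y) = sum_x P(x) P(y|x).
    p_y = {y: sum(p_x[x] * channel[x].get(y, 0.0) for x in p_x) for y in ys}
    dens = [math.log(channel[x][y] / p_y[y])
            for x in p_x for y in channel[x]
            if p_x[x] > 0 and channel[x][y] > 0]
    return min(dens), max(dens)


# Binary randomized response with flip probability 0.25, uniform secret:
# P(y) = 0.5, so i(x;y) takes values log(1.5) and log(0.5).
p_x = {0: 0.5, 1: 0.5}
channel = {0: {0: 0.75, 1: 0.25}, 1: {0: 0.25, 1: 0.75}}
```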
"Information Density Bounds for Privacy," IEEE Transactions on Information Theory, vol. 72, no. 1, pp. 610–635. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11269864
Pub Date: 2025-11-25. DOI: 10.1109/TIT.2025.3632187
"IEEE Transactions on Information Theory Information for Authors," IEEE Transactions on Information Theory, vol. 71, no. 12, p. C3. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11268980
Pub Date: 2025-11-25. DOI: 10.1109/TIT.2025.3614420
Shubhanshu Shekhar;Aaditya Ramdas
Lemma 2 of Shekhar and Ramdas (2024), which was used to derive the upper bound on the expected stopping time stated in (12), contains an error. In this note, we fix this error and provide the correct justification of (12), whose expression remains unchanged up to small constants.
"Corrections to 'Nonparametric Two-Sample Testing by Betting'," IEEE Transactions on Information Theory, vol. 71, no. 12, pp. 9804–9806. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11268981