{"title":"Privacy-Preserving Average-Tracking Control for Multi-Agent Systems with Constant Reference Signals.","authors":"Wei Jiang, Cheng-Lin Liu","doi":"10.3390/e28010120","DOIUrl":"10.3390/e28010120","url":null,"abstract":"<p><p>This paper addresses the average-tracking control problem for multi-agent systems subject to constant reference signals. By introducing auxiliary signals generated from the states and delayed states of agents, a novel privacy-preserving integral-type average-tracking algorithm is proposed. Leveraging the frequency-domain analysis approach, delay-dependent sufficient and necessary conditions for ensuring asymptotic average-tracking convergence are derived. Furthermore, the proposed algorithm is extended to tackle the average-tracking control problem with mismatched reference signals, and a corresponding delay-dependent sufficient condition is established to guarantee privacy-preserving average-tracking convergence. Numerical simulations are conducted to verify the effectiveness of the developed algorithms.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840040/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
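The record above states the goal (every agent converges to the average of the constant references) without reproducing the privacy-preserving, delay-based algorithm itself. As a minimal sketch of that average-tracking objective only, the toy below runs plain Laplacian consensus initialized at the references; the function name, graph, and step size are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

# Illustrative sketch: agents initialized at their constant references r_i
# run continuous-time consensus x_dot = -L x (forward-Euler integration).
# For a connected undirected graph this preserves the average, so every
# state converges to mean(r).  NOT the paper's delayed integral algorithm.
def average_tracking(references, adjacency, dt=0.01, steps=5000):
    x = np.array(references, dtype=float)            # states start at references
    L = np.diag(adjacency.sum(axis=1)) - adjacency   # graph Laplacian
    for _ in range(steps):
        x += -dt * L @ x
    return x

# Ring graph over four agents with references averaging to 4.0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
r = [1.0, 3.0, 5.0, 7.0]
x_final = average_tracking(r, A)
```

Because 1ᵀL = 0 for an undirected graph, the state average is invariant along the flow, which is why all agents settle at mean(r) rather than some other consensus value.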
{"title":"Uncovering Neural Learning Dynamics Through Latent Mutual Information.","authors":"Arianna Issitt, Alex Merino, Lamine Deen, Ryan T White, Mackenzie J Meni","doi":"10.3390/e28010118","DOIUrl":"10.3390/e28010118","url":null,"abstract":"<p><p>We study how convolutional neural networks reorganize information during learning in natural image classification tasks by tracking mutual information (MI) between inputs, intermediate representations, and labels. Across VGG-16, ResNet-18, and ResNet-50, we find that label-relevant MI grows reliably with depth while input MI depends strongly on architecture and activation, indicating that \"compression'' is not a universal phenomenon. Within convolutional layers, label information becomes increasingly concentrated in a small subset of channels; inference-time knockouts, shuffles, and perturbations confirm that these high-MI channels are functionally necessary for accuracy. This behavior suggests a view of representation learning driven by selective concentration and decorrelation rather than global information reduction. Finally, we show that a simple dependence-aware regularizer based on the Hilbert-Schmidt Independence Criterion can encourage these same patterns during training, yielding small accuracy gains and consistently faster convergence.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839813/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
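The abstract names a Hilbert-Schmidt Independence Criterion (HSIC) regularizer without giving its form. A minimal sketch of the standard biased empirical HSIC estimator with Gaussian kernels is below; the bandwidth, sample sizes, and variable names are illustrative assumptions, not the paper's training setup.

```python
import numpy as np

def _gaussian_gram(X, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) Gram matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic_biased(X, Y, sigma=1.0):
    # Biased empirical HSIC: trace(K H L H) / (n-1)^2, H the centering matrix.
    n = X.shape[0]
    K = _gaussian_gram(X, sigma)
    L = _gaussian_gram(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
a = rng.normal(size=(200, 1))
dependent = hsic_biased(a, a + 0.1 * rng.normal(size=(200, 1)))
independent = hsic_biased(a, rng.normal(size=(200, 1)))
# Strongly dependent pairs score much higher than independent ones.
```

Used as a regularizer, such a term would be added to the task loss to reward or penalize dependence between chosen representations; the exact placement in the VGG/ResNet training loop is not specified in this record.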
{"title":"Physiological Noise in Cardiorespiratory Time-Varying Interactions.","authors":"Dushko Lukarski, Dushko Stavrov, Tomislav Stankovski","doi":"10.3390/e28010121","DOIUrl":"10.3390/e28010121","url":null,"abstract":"<p><p>The systems in nature are rarely isolated and there are different influences that can perturb their states. Dynamic noise in physiological systems can cause fluctuations and changes on different levels, often leading to qualitative transitions. In this study, we explore how to detect and extract the physiological noise, in terms of dynamic noise, from measurements of biological oscillatory systems. Moreover, because biological systems often have deterministic time-varying dynamics, we have considered how to detect the dynamic physiological noise while at the same time following the time-variability of the deterministic part. To achieve this, we use dynamical Bayesian inference for modeling stochastic differential equations that describe the phase dynamics of interacting oscillators. We apply this methodological framework to cardio-respiratory signals in which the breathing of the subjects varies in a predefined manner, including free spontaneous, sine, ramped and aperiodic breathing patterns. The statistical results showed significant differences in the physiological noise of the respiration dynamics in relation to the different breathing patterns. The effect from the perturbed breathing was not translated through the interactions to the dynamic noise of the cardiac dynamics. The fruitful cardio-respiratory application demonstrated the potential of the methodological framework for applications to other physiological systems more generally.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839731/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
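The inference machinery in the record above fits stochastic phase equations to data; the full dynamical Bayesian inference scheme is beyond a short sketch, but the forward model it assumes is easy to illustrate. The toy below simulates two coupled noisy phase oscillators with Euler-Maruyama and recovers the dynamic-noise intensity D from the quadratic variation of the phase increments; all frequencies, coupling values, and the seed are made-up illustrative numbers.

```python
import numpy as np

# Forward model sketch: d(phi) = drift(phi) dt + sqrt(2 D) dW for two
# unidirectionally coupled phase oscillators (roughly "cardiac" driven by
# "respiration").  Parameters are illustrative, not fitted values.
rng = np.random.default_rng(1)
dt, n = 1e-3, 200_000
w1, w2, eps, D = 2 * np.pi * 1.1, 2 * np.pi * 0.3, 0.4, 0.5

phi1 = np.zeros(n)
phi2 = np.zeros(n)
for k in range(n - 1):
    drift1 = w1 + eps * np.sin(phi2[k] - phi1[k])   # coupled oscillator
    drift2 = w2                                     # driver oscillator
    phi1[k + 1] = phi1[k] + drift1 * dt + np.sqrt(2 * D * dt) * rng.normal()
    phi2[k + 1] = phi2[k] + drift2 * dt + np.sqrt(2 * D * dt) * rng.normal()

# Quadratic-variation estimate of the noise intensity:
# E[(d phi)^2] ~ 2 D dt for small dt, so D_hat ~ mean(inc^2) / (2 dt).
inc = np.diff(phi1)
D_hat = np.mean(inc**2) / (2 * dt)
```

The small positive bias of D_hat (of order drift² · dt / 2) is the price of ignoring the drift; a full inference scheme estimates drift and noise jointly rather than via quadratic variation.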
{"title":"Reassessing China's Regional Modernization Based on a Grey-Based Evaluation Framework and Spatial Disparity Analysis.","authors":"Wenhao Zhou, Hongxi Lin, Zhiwei Zhang, Siyu Lin","doi":"10.3390/e28010117","DOIUrl":"10.3390/e28010117","url":null,"abstract":"<p><p>Understanding regional disparities in Chinese modernization is essential for achieving coordinated and sustainable development. This study develops a multi-dimensional evaluation framework, integrating grey relational analysis, entropy weighting, and TOPSIS to assess provincial modernization across China from 2018 to 2023. The framework operationalizes Chinese-style modernization through five dimensions: population quality, economic strength, social development, ecological sustainability, and innovation and governance, capturing both material and institutional aspects of development. Using K-Means clustering, kernel density estimation, and convergence analysis, the study examines spatial and temporal patterns of modernization. Results reveal pronounced regional heterogeneity: eastern provinces lead in overall modernization but display internal volatility, central provinces exhibit gradual convergence, and western provinces face widening disparities. Intra-regional analysis highlights uneven development even within geographic clusters, reflecting differential access to resources, governance capacity, and innovation infrastructure. These findings are interpreted through modernization theory, linking observed patterns to governance models, regional development trajectories, and policy coordination. The proposed framework offers a rigorous, data-driven tool for monitoring modernization progress, diagnosing regional bottlenecks, and informing targeted policy interventions. This study demonstrates the methodological value of integrating grey system theory with multi-criteria decision-making and clustering analysis, providing both theoretical insights and practical guidance for advancing balanced and sustainable Chinese-style modernization.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840413/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
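Of the three techniques the record above combines, entropy weighting and TOPSIS have standard textbook forms; the grey relational analysis step is omitted here. The sketch below shows those two stages on made-up benefit-type data (provinces × indicators); all numbers and the vector-normalization choice are illustrative assumptions, not the study's data or exact pipeline.

```python
import numpy as np

def entropy_weights(X):
    # Shannon-entropy criterion weights; X must be strictly positive.
    P = X / X.sum(axis=0)                        # column-wise shares
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - E                                  # degree of diversification
    return d / d.sum()

def topsis(X, w):
    # Weighted vector-normalized matrix, then closeness to the ideal point.
    V = w * X / np.linalg.norm(X, axis=0)
    best, worst = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal (benefit)
    dp = np.linalg.norm(V - best, axis=1)
    dm = np.linalg.norm(V - worst, axis=1)
    return dm / (dp + dm)                        # relative closeness in [0, 1]

# Illustrative data: 3 alternatives ("provinces") x 3 benefit indicators.
X = np.array([[8.2, 5.1, 6.4],
              [6.0, 7.3, 5.5],
              [4.1, 3.9, 4.8]])
w = entropy_weights(X)
scores = topsis(X, w)
```

A criterion whose column is nearly uniform has entropy close to 1 and thus receives almost no weight, which is exactly the discriminatory-power logic of entropy weighting.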
{"title":"Robust Distributed High-Dimensional Regression: A Convoluted Rank Approach.","authors":"Mingcong Wu","doi":"10.3390/e28010119","DOIUrl":"10.3390/e28010119","url":null,"abstract":"<p><p>This paper investigates robust high-dimensional convoluted rank regression in distributed environments. We propose an estimation method suitable for sparse regimes, which remains effective under heavy-tailed errors and outliers, as it does not impose moment assumptions on the noise distribution. To facilitate scalable computation, we develop a local linear approximation algorithm, enabling fast and stable optimization in high-dimensional settings and across distributed systems. Our theoretical results provide non-asymptotic error bounds for both one-round and multi-round communication schemes, explicitly quantifying how estimation accuracy improves with additional communication rounds. Specifically, after a number of communication rounds (logarithmic in the number of machines), the proposed estimator achieves the minimax-optimal convergence rate, up to logarithmic factors. Extensive simulations further demonstrate stable performance across a wide range of error distributions, with accurate coefficient estimation and reliable support recovery.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839581/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146061007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
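"Convoluted rank regression" refers to a convolution-smoothed Wilcoxon-type pairwise loss. The sketch below uses one simple smoothing, replacing the pairwise absolute residual difference |e_i − e_j| by sqrt(u² + h²), and plain gradient descent; the paper's actual smoothing kernel, sparsity penalty, local linear approximation, and distributed communication scheme are not reproduced, so treat every parameter here as an illustrative assumption.

```python
import numpy as np

def smoothed_rank_fit(X, y, h=0.5, lr=0.1, iters=1000):
    # Minimize (1/n^2) * sum_{i,j} sqrt((e_i - e_j)^2 + h^2), e = y - X beta.
    # The pairwise structure gives robustness: no moment condition on noise.
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        e = y - X @ beta
        diff = e[:, None] - e[None, :]               # pairwise residual gaps
        g = diff / np.sqrt(diff**2 + h**2)           # derivative of surrogate
        # chain rule: d(e_i - e_j)/d beta = -(x_i - x_j)
        grad = -(g[:, :, None] * (X[:, None, :] - X[None, :, :])
                 ).sum(axis=(0, 1)) / n**2
        beta -= lr * grad
    return beta

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.standard_t(df=2, size=120)   # heavy-tailed errors
beta_hat = smoothed_rank_fit(X, y)
```

Because the pairwise score g is bounded in [−1, 1], a single huge error contributes a bounded amount to the gradient, which is why the fit survives t(2) noise where least squares would not.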
{"title":"Entropy and Normalization in MCDA: A Data-Driven Perspective on Ranking Stability.","authors":"Ewa Roszkowska","doi":"10.3390/e28010114","DOIUrl":"10.3390/e28010114","url":null,"abstract":"<p><p>Normalization is a critical step in Multiple-Criteria Decision Analysis (MCDA) because it transforms heterogeneous criterion values into comparable information. This study examines normalization techniques through the lens of entropy, highlighting how criterion data structure shapes normalization behavior and ranking stability within TOPSIS (Technique for Order Preference by Similarity to Ideal Solution). Seven widely used normalization procedures are analyzed regarding mathematical properties, sensitivity to extreme values, treatment of benefit and cost criteria, and rank reversal. Normalization is treated as a source of uncertainty in MCDA outcomes, as different schemes can produce divergent rankings under identical decision settings. Shannon entropy is employed as a descriptive measure of information dispersion and structural uncertainty, capturing the heterogeneity and discriminatory potential of criteria rather than serving as a weighting mechanism. An illustrative experiment with ten alternatives and four criteria (two high-entropy, two low-entropy) demonstrates how entropy mediates normalization effects. Seven normalization schemes are examined, including vector, max, linear sum, and max-min procedures. For vector, max, and linear sum, cost-type criteria are treated using either linear inversion or reciprocal transformation, whereas max-min is implemented as a single method. This design separates the choice of normalization form from the choice of cost-criteria transformation, allowing a cleaner identification of their respective contributions to ranking variability. The analysis shows that normalization choice alone can cause substantial differences in preference values and rankings. High-entropy criteria tend to yield stable rankings, whereas low-entropy criteria amplify sensitivity, especially with extreme or cost-type data. These findings position entropy as a key mediator linking data structure with normalization-induced ranking variability and highlight the need to consider entropy explicitly when selecting normalization procedures. Finally, a practical entropy-based method for choosing normalization techniques is introduced to enhance methodological transparency and ranking robustness in MCDA.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839561/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
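The four normalization forms named in the record above, and the normalized Shannon entropy used to characterize a criterion's dispersion, can be sketched directly (benefit criteria only; the cost-criteria inversion and reciprocal variants are omitted). The data vectors are made-up illustrations of a high-entropy (evenly spread) versus low-entropy (one extreme value) criterion.

```python
import numpy as np

def normalize(x, method):
    # Benefit-criterion versions of four common normalization schemes.
    if method == "vector":
        return x / np.linalg.norm(x)
    if method == "max":
        return x / x.max()
    if method == "sum":
        return x / x.sum()
    if method == "max-min":
        return (x - x.min()) / (x.max() - x.min())
    raise ValueError(f"unknown method: {method}")

def shannon_entropy(x):
    # Normalized Shannon entropy in [0, 1]; x must be strictly positive.
    p = x / x.sum()
    return -(p * np.log(p)).sum() / np.log(len(x))

high = np.array([10.0, 12.0, 11.0, 9.0])   # evenly spread -> entropy near 1
low = np.array([1.0, 1.0, 1.0, 40.0])      # one extreme   -> low entropy
```

Applying the four schemes to `low` shows the mechanism discussed in the abstract: max-min stretches the three tied values to 0 while vector and sum normalization keep them small but distinct, so downstream TOPSIS distances (and hence rankings) can differ under identical data.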
{"title":"Variational Deep Alliance: A Generative Auto-Encoding Approach to Longitudinal Data Analysis.","authors":"Shan Feng, Wenxian Xie, Yufeng Nie","doi":"10.3390/e28010113","DOIUrl":"10.3390/e28010113","url":null,"abstract":"<p><p>Rapid advancements in the field of deep learning have had a profound impact on a wide range of scientific studies. This paper incorporates the power of deep neural networks to learn complex relationships in longitudinal data. The novel generative approach, Variational Deep Alliance (VaDA), is established, where an \"alliance\" is formed across repeated measurements via the strength of the Variational Auto-Encoder. VaDA models the generating process of longitudinal data with a unified and well-structured latent space, allowing outcome prediction, subject clustering, and representation learning simultaneously. The integrated model can be inferred efficiently within a stochastic Auto-Encoding Variational Bayes framework, which is scalable to large datasets and can accommodate variables of mixed type. Quantitative comparisons to baseline methods are considered. VaDA shows high robustness and generalization capability across various synthetic scenarios. Moreover, a longitudinal study based on the well-known CelebFaces Attributes dataset is carried out, where we show its usefulness in detecting meaningful latent clusters and generating high-quality face images.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840063/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146061031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Peer Reporting: Sampling Design and Unbiased Estimates.","authors":"Kang Wen, Jianhong Mou, Xin Lu","doi":"10.3390/e28010116","DOIUrl":"10.3390/e28010116","url":null,"abstract":"<p><p>The Ego-Centric Sampling Method (ECM) leverages individual-level reports about peers to estimate population proportions within social networks, offering strong privacy protection without requiring full network data. However, the conventional ECM estimator is unbiased only under the restrictive assumption of a homogeneous network, where node degrees are uniform and uncorrelated with attributes. To overcome this limitation, we introduce the Activity Ratio Corrected ECM estimator (ECMac), which exploits network reciprocity to recast the population-proportion problem into an equivalent formulation in edge space. This reformulation relies solely on ego-peer data and explicitly corrects for degree-attribute dependencies, yielding unbiased and stable estimates even in highly heterogeneous networks. Simulations and analyses on real-world networks show that ECMac reduces estimation error by up to 70% compared with the conventional ECM. Our results establish a theoretically grounded and practically scalable framework for unbiased inference in network-based sampling designs.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840051/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
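The bias the record above corrects is easy to demonstrate: peer reports sample people in "edge space," i.e. proportionally to degree, so a degree-attribute correlation skews the naive proportion estimate. The simulation below shows the naive edge-space average against the true proportion; it illustrates the problem only, not the ECMac activity-ratio correction, and all distributions are made-up.

```python
import numpy as np

# Illustration of degree bias in peer-report (ego-centric) sampling.
# Each node is "reported" once per contact, so the naive peer-report
# proportion is a degree-weighted average of the attribute.
rng = np.random.default_rng(3)
n = 10_000
degree = rng.integers(1, 21, size=n)                 # contacts per node
# Attribute probability rises with degree (degree-attribute coupling).
trait = rng.random(n) < 0.2 + 0.02 * degree

true_p = trait.mean()                                # node-space proportion
peer_report_p = (degree * trait).sum() / degree.sum()  # edge-space proportion
bias = peer_report_p - true_p                        # positive under coupling
```

In a homogeneous network (degree independent of the attribute) the two averages coincide in expectation, which is exactly the restrictive assumption under which the conventional ECM estimator is unbiased.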
{"title":"Extended Arimoto-Blahut Algorithms for Bistatic Integrated Sensing and Communications Systems.","authors":"Tian Jiao, Yanlin Geng, Zhiqiang Wei, Zai Yang","doi":"10.3390/e28010115","DOIUrl":"10.3390/e28010115","url":null,"abstract":"<p><p>Integrated Sensing and Communication (ISAC) has emerged as a cornerstone technology for next-generation wireless networks, where accurate performance evaluation is essential. In such systems, the capacity-distortion function provides a fundamental measure of the trade-off between communication and sensing performance, making its computation a problem of significant interest. However, the associated optimization problem is often constrained by non-convexity, which poses considerable challenges for deriving effective solutions. In this paper, we propose extended Arimoto-Blahut (AB) algorithms to solve the non-convex optimization problem associated with the capacity-distortion trade-off in bistatic ISAC systems. Specifically, we introduce auxiliary variables to transform non-convex distortion constraints in the optimization problem into linear constraints, prove that the reformulated linearly constrained optimization problem maintains the same optimal solution as the original problem, and develop extended AB algorithms for both squared error distortion and logarithmic loss distortion. The numerical results validate the effectiveness of the proposed algorithms.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840500/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146061001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
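The paper's extended algorithms add distortion constraints to the classical Arimoto-Blahut iteration; the unextended baseline for discrete memoryless channel capacity is standard and is sketched below. The binary symmetric channel example and iteration count are illustrative, not from the paper.

```python
import numpy as np

def blahut_arimoto(W, iters=500):
    # Classical Blahut-Arimoto for channel capacity.  W[x, y] = P(y | x).
    # Returns (capacity in bits, capacity-achieving input distribution).
    m = W.shape[0]
    p = np.full(m, 1.0 / m)

    def rel_ent(q):
        # d_x = D(W(.|x) || q), with the convention 0 * log 0 = 0.
        with np.errstate(divide="ignore", invalid="ignore"):
            t = W * np.log(W / q)
        return np.where(W > 0, t, 0.0).sum(axis=1)

    for _ in range(iters):
        d = rel_ent(p @ W)          # q = p @ W is the induced output law
        p = p * np.exp(d)           # multiplicative update toward capacity
        p /= p.sum()
    d = rel_ent(p @ W)
    return float(p @ d) / np.log(2), p

# Binary symmetric channel, crossover 0.1: C = 1 - H2(0.1) ~ 0.531 bits,
# achieved by the uniform input distribution.
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
C, p_opt = blahut_arimoto(W)
```

The extension described in the abstract keeps this alternating structure but restricts the update to inputs satisfying (linearized) distortion constraints; that constrained step is not reproduced here.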
Biophotons are non-thermal and non-bioluminescent ultraweak photon emissions, first hypothesised by Gurwitsch as a regulatory mechanism in cell division, and then experimentally observed in living organisms. Today, two main hypotheses explain their origin: stochastic decay of excited molecules and coherent electromagnetic fields produced in biochemical processes. Recent interest focuses on the role of biophotons in cellular communication and disease monitoring. This study presents the first campaign of biophoton emission measurements from cultured astrocytes and glioblastoma cells, conducted at Fondazione Pisana per la Scienza (FPS) using two ultra-sensitive setups developed in collaboration between the National Laboratories of Frascati (LNF-INFN) and the University of Rome II Tor Vergata. The statistical analyses of the collected data revealed a clear separation between cellular signals and dark noise, confirming the high sensitivity of the apparatus. The Diffusion Entropy Analysis (DEA) was applied to the data to uncover dynamic patterns, revealing anomalous diffusion and long-range memory effects that may be related to intercellular signaling and cellular communication. These findings support the hypothesis that biophoton emissions encode rich information beyond intensity, reflecting metabolic and pathological states. The differences revealed by applying the Diffusion Entropy Analysis to the biophotonic signals of Astrocytes and Glioblastoma are highlighted and discussed in the paper. This work lays the groundwork for future studies on neuronal cultures and proposes biophoton dynamics as a promising tool for non-invasive diagnostics and the study of cellular communication.
{"title":"First Experimental Measurements of Biophotons from Astrocytes and Glioblastoma Cell Cultures.","authors":"Luca De Paolis, Elisabetta Pace, Chiara Maria Mazzanti, Mariangela Morelli, Francesca Di Lorenzo, Lucio Tonello, Catalina Curceanu, Alberto Clozza, Maurizio Grandi, Ivan Davoli, Angelo Gemignani, Paolo Grigolini, Maurizio Benfatto","doi":"10.3390/e28010112","DOIUrl":"10.3390/e28010112","url":null,"abstract":"<p><p>Biophotons are non-thermal and non-bioluminescent ultraweak photon emissions, first hypothesised by Gurwitsch as a regulatory mechanism in cell division, and then experimentally observed in living organisms. Today, two main hypotheses explain their origin: stochastic decay of excited molecules and coherent electromagnetic fields produced in biochemical processes. Recent interest focuses on the role of biophotons in cellular communication and disease monitoring. This study presents the first campaign of biophoton emission measurements from cultured astrocytes and glioblastoma cells, conducted at Fondazione Pisana per la Scienza (FPS) using two ultra-sensitive setups developed in collaboration between the National Laboratories of Frascati (LNF-INFN) and the University of Rome II Tor Vergata. The statistical analyses of the collected data revealed a clear separation between cellular signals and dark noise, confirming the high sensitivity of the apparatus. The Diffusion Entropy Analysis (DEA) was applied to the data to uncover dynamic patterns, revealing anomalous diffusion and long-range memory effects that may be related to intercellular signaling and cellular communication. These findings support the hypothesis that biophoton emissions encode rich information beyond intensity, reflecting metabolic and pathological states. The differences revealed by applying the Diffusion Entropy Analysis to the biophotonic signals of Astrocytes and Glioblastoma are highlighted and discussed in the paper. 
This work lays the groundwork for future studies on neuronal cultures and proposes biophoton dynamics as a promising tool for non-invasive diagnostics and the study of cellular communication.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840560/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146061102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
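The Diffusion Entropy Analysis mentioned in the abstract above can be illustrated with a minimal sketch. This is not the authors' pipeline: it assumes a standard histogram-based estimate of the displacement entropy and a logarithmic fit, and the function name `diffusion_entropy` is illustrative. The idea is to treat the signal as increments of a diffusion process and read the scaling exponent off the growth of the Shannon entropy with window length.

```python
import numpy as np

def diffusion_entropy(signal, window_lengths, bins=40):
    """Minimal Diffusion Entropy Analysis (DEA) sketch.

    The signal is treated as increments of a diffusion process: for each
    window length l, the displacements are sums over sliding windows, and
    the entropy S(l) of their distribution grows as S(l) = A + delta*ln(l)
    for a self-similar process; the slope delta is the scaling exponent.
    """
    # prefix sums let each window sum be computed in O(1)
    cumulative = np.concatenate(([0.0], np.cumsum(signal)))
    entropies = []
    for l in window_lengths:
        # displacements of the diffusion trajectory over windows of length l
        x = cumulative[l:] - cumulative[:-l]
        p, edges = np.histogram(x, bins=bins, density=True)
        dx = edges[1] - edges[0]
        p = p[p > 0]
        # histogram estimate of the differential entropy of the displacements
        entropies.append(-np.sum(p * np.log(p)) * dx)
    # slope of S(l) versus ln(l) estimates the scaling exponent delta
    delta, _ = np.polyfit(np.log(window_lengths), entropies, 1)
    return delta, np.array(entropies)

# Sanity check: Gaussian white-noise increments give ordinary diffusion,
# so the scaling exponent should come out close to 0.5; long-range memory
# of the kind reported for the cell signals would shift it away from 0.5.
rng = np.random.default_rng(0)
delta, _ = diffusion_entropy(rng.standard_normal(50_000),
                             window_lengths=[2 ** k for k in range(2, 10)])
```

In this toy setting the fitted exponent sits near 0.5, the Brownian value; in DEA studies a sustained departure from 0.5 across window lengths is what signals anomalous diffusion and long-range memory.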