This paper establishes a general framework for measuring statistical divergence: given a pair of random variables that share a common range of values, quantifying the distance of the statistical distribution of one random variable from that of the other. The general framework is then applied to the topics of socioeconomic inequality and renewal processes. The framework and its applications are shown to yield, and to relate to, the following: f-divergence, Hellinger divergence, Rényi divergence, and Kullback–Leibler divergence (also known as relative entropy); the Lorenz curve and socioeconomic inequality indices; the Gini index and its generalizations; the divergence of renewal processes from the Poisson process; and the divergence of anomalous relaxation from regular relaxation. Presenting a 'fresh' perspective on statistical divergence, this paper offers its readers a simple and transparent construction of statistical-divergence gauges, as well as novel paths that lead from statistical divergence to the aforementioned topics.
Iddo Eliazar, "Statistical Divergence and Paths Thereof to Socioeconomic Inequality and to Renewal Processes", Entropy (impact factor 2.7), 30 June 2024. doi:10.3390/e26070565
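As a minimal, self-contained sketch of the kinds of gauges the framework connects — Kullback–Leibler and Rényi divergences for distributions, and the Gini index for inequality (the function names and sample data here are illustrative, not taken from the paper):

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p||q), in nats, for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def renyi_divergence(p, q, alpha):
    """Renyi divergence of order alpha (alpha > 0, alpha != 1); it tends to
    the Kullback-Leibler divergence as alpha -> 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

def gini(wealth):
    """Gini index of a nonnegative 'wealth' sample, computed from sorted
    values (0 = perfect equality, -> 1 = extreme inequality)."""
    w = np.sort(np.asarray(wealth, float))
    n = len(w)
    return float(np.sum((2 * np.arange(1, n + 1) - n - 1) * w) / (n * w.sum()))

p, q = [0.5, 0.5], [0.9, 0.1]
print(kl_divergence(p, q))      # positive, since p differs from q
print(kl_divergence(p, p))      # 0.0: zero divergence from itself
print(gini([1, 1, 1, 1]))       # 0.0: a perfectly equal "society"
```

Both divergences vanish exactly when the two distributions coincide, which is the defining property of any statistical-divergence gauge.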
Neural networks have been extensively applied to a variety of tasks, achieving astounding results. Applying neural networks in the scientific field is an important research direction that is gaining increasing attention. In scientific applications, neural networks are generally of moderate size, mainly to ensure the speed of inference during application. Additionally, comparing neural networks to traditional algorithms in scientific applications is inevitable. These applications often require rapid computations, making the reduction of neural network sizes increasingly important. Existing work has found that the powerful capabilities of neural networks are primarily due to their nonlinearity. Theoretical work has discovered that under strong nonlinearity, neurons in the same layer tend to behave similarly, a phenomenon known as condensation. Condensation offers an opportunity to reduce a neural network to a smaller subnetwork with similar performance. In this article, we propose a condensation-based reduction method to verify the feasibility of this idea in practical problems, thereby validating existing theories. Our reduction method can currently be applied to both fully connected networks and convolutional networks, achieving positive results. In a complex combustion-acceleration task, we reduced the size of the neural network to 41.7% of its original scale while maintaining prediction accuracy. In the CIFAR10 image classification task, we reduced the network size to 11.5% of the original scale while still maintaining a satisfactory validation accuracy. Our method can be applied to most trained neural networks, reducing computational load and improving inference speed.
Tianyi Chen and Zhi-Qin John Xu, "Efficient and Flexible Method for Reducing Moderate-Size Deep Neural Networks with Condensation", Entropy, 30 June 2024. doi:10.3390/e26070567
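The core identity that condensation-based reduction exploits can be illustrated with a deliberately simple numpy sketch (ours, not the authors' method): in a one-hidden-layer ReLU network, hidden neurons whose incoming weight vectors are positively parallel produce perfectly correlated activations, so they can be merged into one neuron whose outgoing weights absorb the individual scales, leaving the network function unchanged.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def condense(W1, W2, cos_tol=1e-6):
    """Merge hidden ReLU neurons with positively parallel input weights.

    For y = W2 @ relu(W1 @ x), rows of W1 pointing in the same direction give
    perfectly correlated activations; each such group is replaced by one
    neuron whose outgoing weights absorb the members' scales."""
    norms = np.linalg.norm(W1, axis=1)
    units = W1 / norms[:, None]
    used = np.zeros(len(W1), dtype=bool)
    new_rows, new_cols = [], []
    for i in range(len(W1)):
        if used[i]:
            continue
        group = [j for j in range(i, len(W1))
                 if not used[j] and units[i] @ units[j] > 1.0 - cos_tol]
        for j in group:
            used[j] = True
        new_rows.append(units[i])                          # representative direction
        new_cols.append(sum(norms[j] * W2[:, j] for j in group))
    return np.array(new_rows), np.array(new_cols).T

# Toy check: rows 0 and 1 of W1 are parallel, so 3 hidden neurons condense to 2.
W1 = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])
W2 = np.array([[0.5, -1.0, 2.0]])
V1, V2 = condense(W1, W2)
x = np.array([0.7, -0.3])
print(V1.shape[0])                                         # 2 neurons remain
print(np.allclose(W2 @ relu(W1 @ x), V2 @ relu(V1 @ x)))   # True: same output
```

In trained networks the alignment is only approximate, so merging trades a small accuracy loss for the size reduction reported above.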
In the debate about the direction of time in physics, the concept of time reversal has been central. Tradition has it that time-reversal invariant laws are sufficient to state that the direction of time is non-fundamental or emergent. In this paper, we review some of the debates that have gravitated around the concept of time reversal and its relation to the direction of time. We also clarify some of the central concepts involved, showing that the very concept of time reversal is more complex than frequently thought.
Cristian López and Olimpia Lombardi, "A Review of the Concept of Time Reversal and the Direction of Time", Entropy, 30 June 2024. doi:10.3390/e26070563
The metrological limits of thermometry operated in nonequilibrium dynamical regimes are analyzed. We consider a finite-dimensional quantum system, employed as a quantum thermometer, in contact with a thermal bath inducing Markovian thermalization dynamics. The quantum thermometer is initialized in a generic quantum state, possibly including quantum coherence with respect to the Hamiltonian basis. We prove that the precision of the thermometer, quantified by the Quantum Fisher Information, is enhanced by the quantum coherence in its initial state. We analytically show this in the specific case of qubit thermometers for which the maximization of the Quantum Fisher Information occurs at a finite time during the transient thermalization dynamics. Such a finite-time precision enhancement can be better than the precision that is achieved asymptotically.
Gonçalo Frazão, Marco Pezzutto, Yasser Omar, Emmanuel Zambrini Cruzeiro, and Stefano Gherardini, "Coherence-Enhanced Single-Qubit Thermometry out of Equilibrium", Entropy, 30 June 2024. doi:10.3390/e26070568
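For intuition about the asymptotic benchmark that the transient protocol is compared against, here is a small numeric sketch (our conventions: a qubit of energy gap omega, with k_B = hbar = 1). A diagonal, coherence-free thermal state carries only population information about T, so its Quantum Fisher Information reduces to the classical Fisher information of the populations, which peaks at a finite temperature relative to the gap:

```python
import numpy as np

def thermal_population(T, omega=1.0):
    """Excited-state population of a qubit with gap omega in its Gibbs state
    at temperature T (units with k_B = hbar = 1)."""
    return 1.0 / (1.0 + np.exp(omega / T))

def fisher_information(T, omega=1.0):
    """Fisher information about T carried by the qubit populations alone;
    for a diagonal (coherence-free) state this equals the QFI."""
    p = thermal_population(T, omega)
    dp_dT = (omega / T**2) * p * (1.0 - p)   # analytic derivative of p(T)
    return dp_dT**2 / (p * (1.0 - p))

T = np.linspace(0.05, 2.0, 4000)
T_best = T[np.argmax(fisher_information(T))]
print(T_best)   # the thermal benchmark is most sensitive near T ~ 0.24 * omega
```

The paper's point is that initial coherence can push the transient QFI above this asymptotic curve at finite times.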
Zinuoqi Wang, Guofeng Zhang, Xiaojing Ma, Ruixian Wang
Investigating the significant "roles" within financial complex networks and their stability is of great importance for preventing financial risks. On one hand, this paper constructs a complex network model of the stock market based on mutual information theory and threshold methods, combined with the closing-price returns of stocks. It then analyzes the basic topological characteristics of this network and examines its stability under random and targeted attacks by varying the threshold values. On the other hand, using systemic risk entropy as a metric to quantify the stability of the stock market, this paper validates the impact of the COVID-19 pandemic, as a widespread and unexpected event, on network stability. The results indicate that this complex network exhibits small-world characteristics but cannot be strictly classified as a scale-free network. In this network, key roles are played by the industrial sector, media and information services, pharmaceuticals and healthcare, transportation, and utilities. Upon reducing the threshold, the network's resilience to random attacks is correspondingly strengthened. From a dynamic perspective, systemic risk in the stock markets of key industries increased markedly from 2000 to 2022. From a static perspective, the period around 2019, affected by the COVID-19 pandemic, experienced the most drastic fluctuations. Compared to 2000, systemic risk entropy in 2022 increased nearly sixtyfold, further indicating increasing instability within this complex network.
"Study on the Stability of Complex Networks in the Stock Markets of Key Industries in China", Entropy, 30 June 2024. doi:10.3390/e26070569
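The construction step — mutual information between return series, thresholded into an adjacency matrix — can be sketched as follows. This is a minimal version with a plug-in histogram MI estimator; the bin count, threshold, and synthetic "stocks" are our illustrative choices, not the paper's data:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram (plug-in) estimate of the mutual information, in nats,
    between two return series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def threshold_network(returns, theta):
    """Undirected adjacency matrix: stocks i and j are linked iff the mutual
    information of their return series exceeds the threshold theta."""
    n = len(returns)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if mutual_information(returns[i], returns[j]) > theta:
                A[i, j] = A[j, i] = 1
    return A

rng = np.random.default_rng(1)
x = rng.standard_normal(3000)
returns = [x,                                    # stock 0
           x + 0.2 * rng.standard_normal(3000),  # stock 1: tracks stock 0
           rng.standard_normal(3000)]            # stock 2: independent
A = threshold_network(returns, theta=0.2)
print(A[0, 1], A[0, 2])   # the correlated pair is linked; the independent one is not
```

Lowering theta admits weaker dependencies as edges, which is the mechanism behind the resilience observation above: a denser network degrades more gracefully under random node removal.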
Machine learning classifier-based metrics have promising potential to provide information about the performance of separation systems. Industrial separation systems can be considered to perform a classification task. Starting from this analogy, existing metrics from the machine learning field (e.g., entropy and information gain) used to qualify a classifier model can be used to evaluate the effectiveness of these systems. Our research investigates this idea in general and also presents a case study of an industrial manual waste-sorting system. The contributions of the paper are the following: (1) an overview of the possible applications of classifier-based metrics for process-development aims; (2) entropy and information gain are shown to be applicable for evaluating the efficiency of separation systems and their operation units; (3) Monte Carlo simulation is used to produce robust results for a separation system with stochastic phenomena; (4) the receiver operating characteristic (ROC) curve is shown to be applicable for determining the optimal cut point in a separation system. These ideas are verified by simulation experiments conducted on a stochastic model of a waste-sorting system.
Éva Kenyeres, Alex Kummer, and János Abonyi, "Machine Learning Classifier-Based Metrics Can Evaluate the Efficiency of Separation Systems", Entropy, 30 June 2024. doi:10.3390/e26070571
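The classifier analogy for a single separation unit can be made concrete with a short sketch (our toy compositions, not the paper's case-study data): treat the feed's class composition as a label distribution, and score the unit by the information gain of its output streams.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a composition vector that sums to 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def information_gain(feed, streams):
    """Information gain of a separation unit: entropy of the feed composition
    minus the mass-weighted mean entropy of its output streams.
    `feed` and each stream are per-class mass vectors (e.g. kg of each waste type)."""
    feed = np.asarray(feed, dtype=float)
    h_feed = entropy(feed / feed.sum())
    h_out = sum(s.sum() / feed.sum() * entropy(s / s.sum())
                for s in (np.asarray(s, dtype=float) for s in streams))
    return h_feed - h_out

feed = [50.0, 50.0]                                  # e.g. kg of plastic, kg of metal
print(information_gain(feed, [[50, 0], [0, 50]]))    # 1.0 bit: a perfect separator
print(information_gain(feed, [[25, 25], [25, 25]]))  # 0.0: the split tells us nothing
```

A real sorting unit lands between these extremes, and the gain directly quantifies how much "sorting work" it performs.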
Yungpil Yoo, Sang-Yup Lee, Seok-Ho Seo, Si-Doek Oh, Ho-Young Kwak
Exergy analysis evaluates the efficiency of system components by quantifying the rate of entropy generation. In general, the exergy destruction rate, or irreversibility rate, is obtained directly through the exergy balance equation. However, this method cannot determine the origin of a component's entropy generation rate, which is a very important factor in system design and improvement. In this study, a thorough energy, exergy, and thermoeconomic analysis of a proton-exchange membrane fuel cell (PEMFC) was performed, providing the heat transfer rate, entropy generation rate, and cost loss rate of each component. The irreversibility rate of each component was obtained via the Gouy–Stodola theorem. Detailed and extensive exergy and thermoeconomic analyses of the PEMFC system determined that the water cooling units experience the greatest heat transfer among the components in the studied PEMFC system, resulting in the greatest irreversibility and, thus, the greatest monetary flow loss.
"Energy, Exergetic, and Thermoeconomic Analyses of Hydrogen-Fueled 1-kW Proton-Exchange Membrane Fuel Cell", Entropy, 30 June 2024. doi:10.3390/e26070566
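The Gouy–Stodola step is compact enough to state directly (a generic sketch; the ambient temperature and entropy-generation figures below are illustrative, not values from the paper):

```python
def gouy_stodola_irreversibility(t0, s_gen):
    """Gouy-Stodola theorem: the irreversibility (lost-work) rate of a
    component is I = T0 * S_gen, where T0 is the dead-state (ambient)
    temperature in K and S_gen is the component's entropy generation
    rate in W/K."""
    return t0 * s_gen

# Illustrative numbers: 0.5 W/K of entropy generation at T0 = 298.15 K
print(gouy_stodola_irreversibility(298.15, 0.5))   # 149.075 W of lost work
```

The theorem's value here is diagnostic: computing S_gen per component, rather than reading off exergy destruction from a balance, pinpoints where in the system the losses originate.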
Álvaro Zabaleta-Ortega, Teobaldis Mercado-Fernández, Israel Reyes-Ramírez, Fernando Angulo-Brown, Lev Guzmán-Vargas
We study the statistical interdependence between daily precipitation and daily extreme temperature for regions of Mexico (14 climatic stations, period 1960–2020) and Colombia (7 climatic stations, period 1973–2020) using linear (cross-correlation and coherence) and nonlinear (global phase synchronization index, mutual information, and cross-sample entropy) synchronization metrics. The information shared between these variables is relevant and exhibits changes when comparing regions with different climatic conditions. We show that precipitation and temperature records from La Mojana are characterized by high persistence, while data from Mexico City exhibit lower persistence (less memory). We find that the information exchange and the level of coupling between the precipitation and temperature are higher for the case of the La Mojana region (Colombia) compared to Mexico City (Mexico), revealing that regions where seasonal changes are almost null and with low temperature gradients (less local variability) tend to display higher synchrony compared to regions where seasonal changes are very pronounced. The interdependence characterization between precipitation and temperature represents a robust option to characterize and analyze the collective dynamics of the system, applicable in climate change studies, as well as in changes not easily identifiable in future scenarios.
"Statistical Interdependence between Daily Precipitation and Extreme Daily Temperature in Regions of Mexico and Colombia", Entropy, 29 June 2024. doi:10.3390/e26070558
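Of the linear metrics listed, lagged cross-correlation is the simplest to sketch. The following is a self-contained toy, not the paper's station data: we synthesize a "temperature" series that follows a "precipitation" series with a three-day delay and recover that delay from the correlation peak.

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Normalized cross-correlation of two standardized series at integer
    lags; for lag k, x[t] is paired with y[t + k]."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.empty(len(lags))
    for i, k in enumerate(lags):
        if k >= 0:
            cc[i] = np.mean(x[:len(x) - k] * y[k:])
        else:
            cc[i] = np.mean(x[-k:] * y[:len(y) + k])
    return lags, cc

rng = np.random.default_rng(2)
x = rng.standard_normal(2000)                  # e.g. daily precipitation anomaly
y = np.concatenate([np.zeros(3), x[:-3]])      # "temperature" responding 3 days later
y = y + 0.1 * rng.standard_normal(2000)
lags, cc = cross_correlation(x, y, max_lag=10)
print(lags[np.argmax(cc)])   # peak at lag 3: y follows x by three days
```

The nonlinear metrics in the study (mutual information, cross-sample entropy) capture dependencies that this linear statistic misses, which is why the paper uses both families.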
This article studies a class of uncertain nonlinear multiagent systems (MASs) with full-state constraints. Radial basis function neural networks (RBFNNs) are utilized to estimate the uncertainty of the system. To approximate the unknown states and disturbances, a state observer and a disturbance observer are proposed. Moreover, a fast finite-time consensus control technique is suggested in order to achieve fast finite-time stability without violating the full-state constraints. It is demonstrated that every signal is stable and bounded, and an event-triggered controller is considered to save communication resources. Ultimately, a simulated example demonstrates the validity of the developed approach.
Kewei Zhou and Xin Wang, "Fast Finite-Time Observer-Based Event-Triggered Consensus Control for Uncertain Nonlinear Multiagent Systems with Full-State Constraints", Entropy, 29 June 2024. doi:10.3390/e26070559
The Information Causality principle was proposed to re-derive the Tsirelson bound, an upper limit on the strength of quantum correlations, and has been suggested as a candidate law of nature. The principle states that the Shannon information about Alice's distant database gained by Bob after receiving an m-bit message cannot exceed m bits, even when Alice and Bob share non-local resources. As originally formulated, it can be shown that the principle is violated exactly when the strength of the shared correlations exceeds the Tsirelson bound. However, we demonstrate here that when an alternative measure of information, one of the Rényi measures, is chosen, the Information Causality principle no longer arrives at the correct value for the Tsirelson bound. We argue that neither the assumption of particular 'intuitive' properties of uncertainty measures, nor pragmatic choices about how to optimise costs associated with communication, is sufficient to motivate uniquely the choice of the Shannon measure from amongst the more general Rényi measures. We conclude that the dependence of the success of Information Causality on mere convention undermines its claimed significance as a foundational principle.
Natasha Oughton and Christopher G. Timpson, "Bounding Quantum Correlations: The Role of the Shannon Information in the Information Causality Principle", Entropy, 29 June 2024. doi:10.3390/e26070562
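The pivot of the argument — that Shannon information has structural properties the Rényi measures lack — can be checked numerically. The original Information Causality derivation leans on the chain rule H(X,Y) = H(X) + H(Y|X); here is a small sketch showing it holds for Shannon entropy but fails at Rényi order 2 (the toy joint distribution is ours, and since conditional Rényi entropy has several inequivalent definitions, we use the naive p(x)-weighted average):

```python
import numpy as np

def shannon(p):
    """Shannon entropy, in bits."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def renyi(p, alpha):
    """Renyi entropy of order alpha != 1, in bits."""
    p = np.asarray(p, dtype=float).ravel()
    return float(np.log2((p ** alpha).sum()) / (1.0 - alpha))

pxy = np.array([[0.4, 0.1], [0.2, 0.3]])   # toy joint distribution of (X, Y)
px = pxy.sum(axis=1)

def chain_gap(H):
    """H(X,Y) - [H(X) + sum_x p(x) H(Y|X=x)] for an entropy functional H."""
    H_cond = sum(px[i] * H(pxy[i] / px[i]) for i in range(len(px)))
    return H(pxy) - H(px) - H_cond

print(chain_gap(shannon))                  # ~0: Shannon obeys the chain rule
print(chain_gap(lambda p: renyi(p, 2.0)))  # nonzero: Renyi-2 does not
```

This is exactly the kind of 'intuitive' property whose availability depends on the choice of measure, which is the paper's ground for calling that choice conventional.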