This paper does not aim to prove new mathematical theorems or claim a fundamental unification of physics and information, but rather to provide a new pedagogical framework for interpreting foundational results in algorithmic information theory. Our focus is on understanding the profound connection between entropy and Kolmogorov complexity. We achieve this by applying these concepts to a physical model. Our work is centered on the distinction, first articulated by Boltzmann, between observable low-complexity macrostates and unobservable high-complexity microstates. We re-examine the known relationships linking complexity and probability, as detailed in works like Li and Vitányi's An Introduction to Kolmogorov Complexity and Its Applications. Our contribution is to explicitly identify the abstract complexity of a probability distribution K(ρ) with the concrete physical complexity of a macrostate K(M). Using this framework, we explore the "Not Alone" principle, which states that a high-complexity microstate must belong to a large cluster of peers sharing the same simple properties. We show how this result is a natural consequence of our physical framework, thus providing a clear intuitive model for understanding how algorithmic information imposes structural constraints on physical systems. We end by exploring concrete properties in physics, resolving a few apparent paradoxes, and revealing how these laws are the statistical consequences of simple rules.
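To make the "known relationships linking complexity and probability" concrete, one standard statement following Li and Vitányi (quoted here from memory; the additive terms and conditioning conventions vary between presentations) bounds the expected prefix complexity of samples from a computable distribution ρ by its Shannon entropy, which is presumably where the paper's identification of K(ρ) with the macrostate complexity K(M) enters:

```latex
% Expected Kolmogorov complexity vs. Shannon entropy for a computable
% distribution rho (after Li & Vitanyi); constants depend on conventions.
H(\rho) \;\le\; \sum_{x} \rho(x)\,K(x)
        \;\le\; H(\rho) + K(\rho) + O(1),
\qquad
H(\rho) = -\sum_{x} \rho(x)\,\log_2 \rho(x).
```

On this reading, the K(ρ) correction term is exactly the description length of the simple, observable macrostate, while H(ρ) accounts for the unobservable microscopic detail.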
{"title":"A Physical Framework for Algorithmic Entropy.","authors":"Jeff Edmonds","doi":"10.3390/e28010061","DOIUrl":"10.3390/e28010061","url":null,"abstract":"<p><p>This paper does not aim to prove new mathematical theorems or claim a fundamental unification of physics and information, but rather to provide a new pedagogical framework for interpreting foundational results in algorithmic information theory. Our focus is on understanding the profound connection between entropy and Kolmogorov complexity. We achieve this by applying these concepts to a physical model. Our work is centered on the distinction, first articulated by Boltzmann, between observable low-complexity macrostates and unobservable high-complexity microstates. We re-examine the known relationships linking complexity and probability, as detailed in works like Li and Vitányi's <i>An Introduction to Kolmogorov Complexity and Its Applications</i>. Our contribution is to explicitly identify the abstract complexity of a probability distribution K(ρ) with the concrete physical complexity of a macrostate K(M). Using this framework, we explore the \"Not Alone\" principle, which states that a high-complexity microstate must belong to a large cluster of peers sharing the same simple properties. We show how this result is a natural consequence of our physical framework, thus providing a clear intuitive model for understanding how algorithmic information imposes structural constraints on physical systems. We end by exploring concrete properties in physics, resolving a few apparent paradoxes, and revealing how these laws are the statistical consequences of simple rules.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839820/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146061022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Our global society is now tightly integrated with an interwoven technological web, the so-called digital fabric [...].
{"title":"Wireless Communications: Signal Processing Perspectives.","authors":"Sébastien Roy","doi":"10.3390/e28010060","DOIUrl":"10.3390/e28010060","url":null,"abstract":"<p><p>Our global society is now tightly integrated with an interwoven technological web-the so-called digital fabric [...].</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839605/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The valuation of companies has long been a cornerstone of financial analysis and investment decision-making, offering critical frameworks for investors to gauge a firm's worth and evaluate the relative value of future income streams within a specific industry or sector. In this work we propose a new valuation framework that integrates traditional and modern valuation approaches, providing actionable insights for investors and analysts seeking to optimize asset allocation and portfolio performance. We introduce Targeted Accounting Range Factor Analysis (TARFA), a novel framework for comparable company valuation that identifies investor-preferred, return-driving points for accounting-based factors. Through an analysis of 68 commonly used accounting measures, the study identifies three key factors that drive superior returns. The results of the TARFA framework demonstrate that both the general and sector-specific models consistently outperformed population returns, with the general model showing superior performance in broader market contexts. The study also highlights the stability of key financial ratios over time and introduces the Relative Equity Score, further enhancing the model's ability to identify undervalued equities.
{"title":"TARFA: A Novel Approach to Targeted Accounting Range Factor Analysis for Asset Allocation.","authors":"Jose Juan de Leon, Francesca Medda","doi":"10.3390/e28010052","DOIUrl":"10.3390/e28010052","url":null,"abstract":"<p><p>The valuation of companies has long been a cornerstone of financial analysis and investment decision-making, offering critical frameworks for investors to gauge a firm's worth and evaluate the relative value of future income streams within a specific industry or sector. In this work we propose a new valuation framework by integrating traditional and modern valuation approaches, providing actionable insights for investors and analysts seeking to optimize asset allocation and portfolio performance. We introduce a novel framework (TARFA) to comparable company valuation by identifying investor-preferred return-driving points for accounting-based factors. Through an analysis of 68 commonly used accounting measures, the study identifies three key factors that drive superior returns. The results of the TARFA framework demonstrate that both general and sector-specific models consistently outperformed population returns, with the general model showing superior performance in broader market contexts. The study also highlights the stability of key financial ratios over time and introduces the Relative Equity Score, further enhancing the model's ability to identify undervalued equities.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839931/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When training a neural network, the choice of activation function can greatly impact performance. A function with a large derivative may cause the weights of later layers to deviate further from the computed update direction, making deep networks harder to train. However, an activation function whose derivative has amplitude less than one can lead to vanishing gradients. To overcome this drawback, we propose applying pseudo-normalization, which enlarges some gradients by dividing them by their root mean square. This amplification is performed every few layers to keep the amplitudes from falling below one, thereby avoiding vanishing gradients while also preventing gradient explosion. We successfully applied this approach to several deep networks with hyperbolic tangent activations for image classification. To gain a deeper understanding of the algorithm, we employed interpretability techniques to examine the networks' predictions. We found that, in contrast to popular networks that learn richer image features, our networks rely primarily on the contour information of images for classification. This suggests that our technique can be used alongside other widely used algorithms.
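As a concrete illustration of the idea described above, here is a minimal PyTorch sketch based on our reading of the abstract, not the authors' released code: every few tanh layers, the gradient flowing backward is divided by its root mean square so its amplitude is pulled back toward one before it continues to earlier layers. The class names, the rescaling interval, and the epsilon constant are illustrative choices.

```python
import torch
import torch.nn as nn


class GradRMSRescale(torch.autograd.Function):
    """Identity in the forward pass; in the backward pass the incoming
    gradient is divided by its root mean square, so its amplitude is
    brought back to roughly one before reaching earlier layers."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        rms = grad_output.pow(2).mean().sqrt()
        return grad_output / (rms + 1e-8)


class PseudoNorm(nn.Module):
    """Thin wrapper so the rescaling can be placed inside nn.Sequential."""

    def forward(self, x):
        return GradRMSRescale.apply(x)


def make_tanh_mlp(sizes, rescale_every=3):
    """Stack Linear+Tanh blocks, inserting PseudoNorm every `rescale_every` layers."""
    layers = []
    for i, (d_in, d_out) in enumerate(zip(sizes[:-1], sizes[1:])):
        layers += [nn.Linear(d_in, d_out), nn.Tanh()]
        if (i + 1) % rescale_every == 0:
            layers.append(PseudoNorm())
    return nn.Sequential(*layers)


if __name__ == "__main__":
    model = make_tanh_mlp([784, 256, 256, 256, 256, 256, 10])
    x = torch.randn(32, 784)
    model(x).sum().backward()  # gradients reach the first layer without collapsing toward zero
```

Because the rescaling only touches the backward pass, the forward computation of the tanh network is unchanged, which matches the abstract's claim that the method can be combined with other training techniques.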
{"title":"Mitigating the Vanishing Gradient Problem Using a Pseudo-Normalizing Method.","authors":"Yun Bu, Wenbo Jiang, Gang Lu, Qiang Zhang","doi":"10.3390/e28010057","DOIUrl":"10.3390/e28010057","url":null,"abstract":"<p><p>When training a neural network, the choice of activation function can greatly impact its performance. A function with a larger derivative may cause the coefficients of the latter layers to deviate further from the calculated direction, making deep learning more difficult to train. However, an activation function with a derivative amplitude of less than one can result in the problem of a vanishing gradient. To overcome this drawback, we propose the application of pseudo-normalization to enlarge some gradients by dividing them by the root mean square. This amplification is performed every few layers to ensure that the amplitudes are larger than one, thus avoiding the condition of vanishing gradient and preventing gradient explosion. We successfully applied this approach to several deep learning networks with hyperbolic tangent activation for image classifications. To gain a deeper understanding of the algorithm, we employed interpretability techniques to examine the network's prediction outcomes. We discovered that, in contrast to popular networks that learn picture characteristics, the networks primarily employ the contour information of images for categorization. This suggests that our technique can be utilized in addition to other widely used algorithms.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839799/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interfaces of rather different natures (e.g., bacterial colony or forest fire boundaries, or semiconductor layers grown by different methods such as MBE or sputtering) are self-affine fractals and exhibit scaling with universal exponents, which depend on the substrate's dimensionality d and global topology, as well as on the spatial and temporal correlations of the driving randomness, but not on the underlying mechanisms. By adding lateral growth as an essential (non-equilibrium) ingredient to the known equilibrium ones (randomness and interface relaxation), the Kardar-Parisi-Zhang (KPZ) equation succeeded in finding, via the dynamic renormalization group, the correct exponents for flat d=1 substrates and spatially and temporally uncorrelated randomness. It is this interplay that gives rise to the unique, non-Gaussian scaling properties characteristic of this specific, universal type of non-equilibrium roughening. Later, the asymptotic statistics of the fluctuations of the process h(x) in the scaling regime were also found analytically for d=1 substrates. For d>1 substrates, however, one has to rely on numerical simulations. Here we review a variational approach that allows for analytical progress regardless of substrate dimensionality. After reviewing our previous numerical results in d=1, 2, and 3 on the time evolution of one of the functionals, which we call the non-equilibrium potential (NEP), and on its scaling behavior with the nonlinearity parameter λ, we discuss the stochastic thermodynamics of the roughening process and the memory of the process h(x) in KPZ and in the related Golubović-Bruinsma (GB) model, providing numerical evidence that the NEP's asymptotic behavior depends significantly on initial conditions in both models. Finally, we highlight some open questions.
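For reference, the KPZ equation discussed above reads, in the usual notation with ν the surface tension, λ the lateral-growth nonlinearity, and ξ a zero-mean Gaussian white noise of strength D:

```latex
\frac{\partial h(\mathbf{x},t)}{\partial t}
  = \nu\,\nabla^{2}h
  + \frac{\lambda}{2}\bigl(\nabla h\bigr)^{2}
  + \xi(\mathbf{x},t),
\qquad
\bigl\langle \xi(\mathbf{x},t)\,\xi(\mathbf{x}',t') \bigr\rangle
  = 2D\,\delta^{d}(\mathbf{x}-\mathbf{x}')\,\delta(t-t').
```

Setting λ=0 recovers the equilibrium Edwards-Wilkinson equation, which is the sense in which lateral growth is the essential non-equilibrium ingredient.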
{"title":"The KPZ Equation of Kinetic Interface Roughening: A Variational Perspective.","authors":"Horacio S Wio, Roberto R Deza, Jorge A Revelli, Rafael Gallego, Reinaldo García-García, Miguel A Rodríguez","doi":"10.3390/e28010055","DOIUrl":"10.3390/e28010055","url":null,"abstract":"<p><p>Interfaces of rather different natures-as, e.g., bacterial colony or forest fire boundaries, or semiconductor layers grown by different methods (MBE, sputtering, etc.)-are self-affine fractals, and feature scaling with <i>universal</i> exponents (depending on the substrate's dimensionality <i>d</i> and global topology, as well as on the driving randomness' spatial and temporal correlations but <i>not</i> on the underlying mechanisms). Adding lateral growth as an essential (non-equilibrium) ingredient to the known equilibrium ones (randomness and interface relaxation), the Kardar-Parisi-Zhang (KPZ) equation succeeded in finding (via the dynamic renormalization group) the correct exponents for flat d=1 substrates and (spatially and temporally) uncorrelated randomness. It is this <i>interplay</i> which gives rise to the unique, non-Gaussian scaling properties characteristic of the specific, universal type of non-equilibrium roughening. Later on, the asymptotic statistics of process h(x) fluctuations in the scaling regime was also analytically found for d=1 substrates. For d>1 substrates, however, one has to rely on numerical simulations. Here we review a variational approach that allows for analytical progress regardless of substrate dimensionality. After reviewing our previous numerical results in d=1, 2, and 3 on the time evolution of one of the functionals-which we call the <i>non-equilibrium potential</i> (NEP)-as well as its scaling behavior with the nonlinearity parameter λ, we discuss the stochastic thermodynamics of the roughening process and the memory of process h(x) in KPZ and in the related Golubović-Bruinsma (GB) model, providing numerical evidence for the significant dependence on initial conditions of the NEP's asymptotic behavior in both models. Finally, we highlight some open questions.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839851/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robustness to distributional shifts remains a critical limitation for deploying deep neural networks (DNNs) in real-world applications. While DNNs excel in standard benchmarks, their performance often deteriorates under unseen or perturbed conditions. Understanding how internal information representations relate to such robustness remains underexplored. In this work, we propose an interpretable framework for robustness assessment based on partial information decomposition (PID), which quantifies how neurons redundantly, uniquely, or synergistically encode task-relevant information. Analysis of PID measures computed from clean inputs reveals that models characterized by higher redundancy rates and lower synergy rates tend to maintain more stable performance under various natural corruptions. Additionally, a higher rate of unique information is positively associated with improved classification accuracy on the data from which the measure is computed. These findings provide new insights for understanding and comparing model behavior through internal information analysis, and highlight the feasibility of lightweight robustness assessment without requiring extensive access to corrupted data.
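For readers unfamiliar with PID, the two-source decomposition introduced by Williams and Beer (the abstract does not name the specific redundancy measure it adopts) splits the mutual information between a target T, here the task label, and two sources X1, X2, here neurons or groups of neurons in our reading, into four non-negative atoms:

```latex
I(T; X_1, X_2) = \mathrm{Red}(T; X_1, X_2)
               + \mathrm{Unq}(T; X_1)
               + \mathrm{Unq}(T; X_2)
               + \mathrm{Syn}(T; X_1, X_2),
\qquad
I(T; X_i) = \mathrm{Red}(T; X_1, X_2) + \mathrm{Unq}(T; X_i).
```

The redundancy, unique, and synergy "rates" referred to above would then, presumably, be these atoms normalized by the total mutual information I(T; X1, X2).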
{"title":"Interpreting Performance of Deep Neural Networks with Partial Information Decomposition.","authors":"Tianyue Liu, Binghui Guo, Ziqiao Yin, Zhilong Mi, Donghui Jin","doi":"10.3390/e28010050","DOIUrl":"10.3390/e28010050","url":null,"abstract":"<p><p>Robustness to distributional shifts remains a critical limitation for deploying deep neural networks (DNNs) in real-world applications. While DNNs excel in standard benchmarks, their performance often deteriorates under unseen or perturbed conditions. Understanding how internal information representations relate to such robustness remains underexplored. In this work, we propose an interpretable framework for robustness assessment based on partial information decomposition (PID), which quantifies how neurons redundantly, uniquely, or synergistically encode task-relevant information. Analysis of PID measures computed from clean inputs reveals that models characterized by higher redundancy rates and lower synergy rates tend to maintain more stable performance under various natural corruptions. Additionally, a higher rate of unique information is positively associated with improved classification accuracy on the data from which the measure is computed. These findings provide new insights for understanding and comparing model behavior through internal information analysis, and highlight the feasibility of lightweight robustness assessment without requiring extensive access to corrupted data.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839711/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper applies quantum game theory to three ethical dilemmas that frequently arise in negotiation: cooperation versus competition, self-interest versus equity, and honesty versus deception. Using quantum extensions of selected games such as the Prisoner's Dilemma, the Ultimatum Game, the Battle of the Sexes, and the Buyer-Seller Game, we examine whether quantization can generate equilibria that improve classical outcomes while also aligning more closely with ethical principles such as fairness, cooperation, and honesty. The analysis shows that quantum strategies, through entanglement and superposition, can sustain cooperative, fair, or honest behaviour as stable equilibria, outcomes that are typically unstable or unattainable in classical settings. The specific outcomes depend on the chosen quantization method, but across cases, the analysis consistently shows that quantum formulations expand the range of solutions in which efficiency and ethical principles co-exist.
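One widely used quantization scheme for 2x2 games such as the Prisoner's Dilemma is the Eisert-Wilkens-Lewenstein (EWL) protocol; since the abstract notes that outcomes depend on the chosen quantization method, the following is shown only as a representative example, not necessarily the scheme the paper applies to each game. Classical moves |C⟩ (cooperate) and |D⟩ (defect) are replaced by local unitaries acting between an entangling gate and its inverse:

```latex
|\psi_{f}\rangle
  = \hat{J}^{\dagger}\,\bigl(\hat{U}_{A}\otimes\hat{U}_{B}\bigr)\,\hat{J}\,|CC\rangle,
\qquad
\hat{J} = \exp\!\Bigl(i\,\tfrac{\gamma}{2}\,\hat{D}\otimes\hat{D}\Bigr),
\qquad
\gamma \in \bigl[0,\tfrac{\pi}{2}\bigr],
```

where D̂ is the unitary implementing defection, γ controls the entanglement (γ=0 reproduces the classical game, γ=π/2 is maximal), and each player's payoff is the expectation of the classical payoff table in the measured state |ψ_f⟩. In the original EWL analysis of the Prisoner's Dilemma, maximal entanglement admits a new equilibrium that realizes the mutually cooperative payoff, which is the kind of ethically preferable, efficiency-improving equilibrium the paper examines across games.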
{"title":"Quantum Negotiation Games: Toward Ethical Equilibria.","authors":"Remigiusz Smoliński, Piotr Frąckiewicz, Krzysztof Grzanka, Marek Szopa","doi":"10.3390/e28010051","DOIUrl":"10.3390/e28010051","url":null,"abstract":"<p><p>This paper applies quantum game theory to three ethical dilemmas that frequently arise in negotiation: cooperation versus competition, self-interest versus equity, and honesty versus deception. Using quantum extensions of selected games such as the Prisoner's Dilemma, the Ultimatum Game, the Battle of the Sexes, and the Buyer-Seller Game, we examine whether quantization can generate equilibria that improve classical outcomes while also aligning more closely with ethical principles such as fairness, cooperation, and honesty. The analysis shows that quantum strategies, through entanglement and superposition, can sustain cooperative, fair, or honest behaviour as stable equilibria, outcomes that are typically unstable or unattainable in classical settings. The specific outcomes depend on the chosen quantization method, but across cases, the analysis consistently shows that quantum formulations expand the range of solutions in which efficiency and ethical principles co-exist.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839691/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantum computation with d-level quantum systems, also known as qudits, benefits from a richer computational space than qubits offer. However, on an arbitrary qudit-based hardware platform, a generic qudit operation has to be decomposed into a sequence of native operations: pulses tuned to transitions between two levels of the qudit. Typically, not all levels of a qudit are directly coupled to each other, owing to specific selection rules. Moreover, the number of pulses plays a significant role, since each pulse takes a certain execution time and may introduce errors. In this paper, we propose a resource-efficient algorithm to decompose single-qudit operations into a sequence of pulses allowed by the qudit's selection rules. With the developed algorithm, the number of pulses is at most d(d-1)/2 for an arbitrary single-qudit operation, and for specific operations it can produce even fewer pulses. We compare qudit decompositions for several types of trapped ions, specifically ¹⁷¹Yb⁺, ¹³⁷Ba⁺, and ⁴⁰Ca⁺, which have different selection rules, as well as decompositions for superconducting qudits. Although it deals only with single-qudit operations, the proposed approach is also important for realizing two-qudit operations, since these can be implemented as a standard two-qubit gate surrounded by efficiently implemented single-qudit gates.
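The d(d-1)/2 bound quoted above matches the textbook decomposition of a d x d unitary into two-level (Givens-type) rotations. The sketch below is only that standard construction, not the paper's transition-aware algorithm, which additionally restricts each rotation to level pairs allowed by the hardware's selection rules; it zeroes the sub-diagonal entries column by column, each step acting on just two adjacent levels, so the rotation count never exceeds d(d-1)/2.

```python
import numpy as np


def two_level_decomposition(U, tol=1e-12):
    """Decompose a d x d unitary into two-level (Givens-type) rotations.

    Returns a list of (j, k, G2) triples, where G2 is the 2x2 unitary acting
    on levels j < k, plus the final diagonal of phases, such that applying
    the rotations in order to U leaves only that diagonal.  At most
    d*(d-1)/2 rotations are produced, matching the bound in the abstract.
    """
    d = U.shape[0]
    V = U.astype(complex).copy()
    rotations = []
    for col in range(d - 1):
        # Zero the entries below the diagonal of this column, bottom-up,
        # mixing adjacent levels (row-1, row) at each step.
        for row in range(d - 1, col, -1):
            a, b = V[row - 1, col], V[row, col]
            if abs(b) < tol:
                continue
            r = np.hypot(abs(a), abs(b))
            G2 = np.array([[np.conj(a), np.conj(b)],
                           [-b,         a        ]]) / r
            G = np.eye(d, dtype=complex)
            G[np.ix_([row - 1, row], [row - 1, row])] = G2
            V = G @ V
            rotations.append((row - 1, row, G2))
    # A triangular unitary is necessarily diagonal, so only phases remain.
    return rotations, np.diag(V)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 4
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    U, _ = np.linalg.qr(A)  # random unitary
    rots, phases = two_level_decomposition(U)
    assert len(rots) <= d * (d - 1) // 2
```

Each 2x2 block here plays the role of one pulse between two adjacent levels; the paper's contribution, as we read the abstract, is choosing the level pairs so that every rotation corresponds to a transition the hardware actually allows.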
{"title":"Transition-Aware Decomposition of Single-Qudit Gates.","authors":"Denis A Drozhzhin, Evgeniy O Kiktenko, Aleksey K Fedorov, Anastasiia S Nikolaeva","doi":"10.3390/e28010056","DOIUrl":"10.3390/e28010056","url":null,"abstract":"<p><p>Quantum computation with <i>d</i>-level quantum systems, also known as qudits, benefits from the possibility to use a richer computational space compared to qubits. However, for an arbitrary qudit-based hardware platform, the issue is that a generic qudit operation has to be decomposed into the sequence of native operations-pulses that are adjusted to the transitions between two levels in a qudit. Typically, not all levels in a qudit are simply connected to each other due to specific selection rules. Moreover, the number of pulses plays a significant role, since each pulse takes a certain execution time and may introduce error. In this paper, we propose a resource-efficient algorithm to decompose single-qudit operations into the sequence of pulses that are allowed by qudit selection rules. Using the developed algorithm, the number of pulses is at most d(d-1)/2 for an arbitrary single-qudit operation. For specific operations, the algorithm could produce even fewer pulses. We provide a comparison of qudit decompositions for several types of trapped ions, specifically Yb+171, Ba+137 and Ca+40 with different selection rules, and also decomposition for superconducting qudits. Although our approach deals with single-qudit operations, the proposed approach is important for realizing two-qudit operations since they can be implemented as a standard two-qubit gate that is surrounded by efficiently implemented single-qudit gates.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840474/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, the Multistage Bipolar method is developed. The paper presents a synthesis of three streams of multiple criteria decision-making: the reference-point-based approach, the interactive approach, and multistage decision processes. A significant problem, whose solution is a prerequisite for applying the Multistage Bipolar method, is the determination of the sets of reference objects for the subsequent stages. This paper addresses the question of how to use an interactive multi-criteria approach to select, for each stage of the considered process, subsets of 'good' and 'bad' objects that the decision-maker will accept as sets of reference objects; its objective is to propose an interactive procedure for generating these sets. The proposed approach is illustrated by using stage sets of reference points, generated via the interactive procedure, within a mathematical model for resource allocation in a multistage regional development planning problem. The problem addressed is formulated as a mathematical economics model and, at the same time, demonstrates that multi-criteria methods are widely applicable in management. Of fundamental importance here is spending public funds in a manner that yields maximum benefits for citizens.
{"title":"Interactive Selection of Reference Sets in Multistage Bipolar Method.","authors":"Maciej Nowak, Tadeusz Trzaskalik","doi":"10.3390/e28010054","DOIUrl":"10.3390/e28010054","url":null,"abstract":"<p><p>In this paper, the Multistage Bipolar method is developed. The paper presents a synthesis of three streams related to multiple criteria decision-making: the reference point-based approach, the interactive approach and multistage decision processes. A significant problem, the solution of which is a prerequisite for the application of the Multistage Bipolar method, is the determination of the sets of reference objects for subsequent stages. This paper addresses the question of how to utilize an interactive multi-criteria approach to select subsets of 'good' and 'bad' objects for each stage of the considered process, which the decision-maker will accept as sets of reference objects. Its objective is to propose an interactive procedure for generating these sets. The approach proposed in this paper is illustrated by the utilization of stage sets of reference points, generated via the proposed interactive procedure, within a mathematical model for resource allocation in a multistage regional development planning problem. The problem addressed constitutes a mathematical economics model, while simultaneously demonstrating that multi-criteria methods are widely applicable in management. Of fundamental importance here is the expenditure of public funds in a manner that yields maximum benefits for citizens.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840201/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We review aspects of entanglement entropy in the quantum mechanics of N×N matrices, i.e., matrix quantum mechanics (MQM), at large N. In doing so, we review standard models of MQM and their relation to string theory, D-brane physics, and emergent non-commutative geometries. We give a general overview of definitions of subsystems and entanglement entropies in theories with gauge redundancy, and discuss the additional structure required for defining subsystems in MQMs possessing a U(N) gauge redundancy. In connecting these subsystems to non-commutative geometry, we review several works on 'target space entanglement' and on entanglement in non-commutative field theories, highlighting the conditions under which target space entanglement entropy displays an 'area law' at large N. We summarize several example calculations of entanglement entropy in non-commutative geometries and MQMs. We review recent work connecting the area-law entanglement of MQM to the Ryu-Takayanagi formula, highlighting the conditions under which U(N) invariance implies a minimal-area formula for the entanglement entropy at large N. Finally, we comment on open questions and research directions.
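For orientation, the two quantities the review ties together are the von Neumann entanglement entropy of a subsystem A and the Ryu-Takayanagi area formula to which it is matched at large N (γ_A being the minimal surface homologous to the region A, and G_N Newton's constant):

```latex
S_{A} = -\,\mathrm{Tr}\bigl(\rho_{A}\ln\rho_{A}\bigr),
\qquad
\rho_{A} = \mathrm{Tr}_{\bar{A}}\,|\psi\rangle\langle\psi|,
\qquad
S_{A} \simeq \frac{\mathrm{Area}(\gamma_{A})}{4\,G_{N}}.
```

As the abstract emphasizes, the nontrivial step in MQM is defining the subsystem A, and hence ρ_A, in a way compatible with the U(N) gauge redundancy in the first place.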
{"title":"Matrix Quantum Mechanics and Entanglement Entropy: A Review.","authors":"Jackson R Fliss, Alexander Frenkel","doi":"10.3390/e28010058","DOIUrl":"10.3390/e28010058","url":null,"abstract":"<p><p>We review aspects of entanglement entropy in the quantum mechanics of N×N matrices, i.e., matrix quantum mechanics (MQM), at large <i>N</i>. In doing so, we review standard models of MQM and their relation to string theory, D-brane physics, and emergent non-commutative geometries. We overview, in generality, definitions of subsystems and entanglement entropies in theories with gauge redundancy and discuss the additional structure required for definining subsystems in MQMs possessing a U(N) gauge redundancy. In connecting these subsystems to non-commutative geometry, we review several works on 'target space entanglement,' and entanglement in non-commutative field theories, highlighting the conditions in which target space entanglement entropy displays an 'area law' at large <i>N</i>. We summarize several example calculations of entanglement entropy in non-commutative geometries and MQMs. We review recent work in connecting the area law entanglement of MQM to the Ryu-Takayanagi formula, highlighting the conditions in which U(N) invariance implies a minimal area formula for the entanglement entropy at large <i>N</i>. Finally, we make comments on open questions and research directions.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840427/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}