The contributions collected in this Special Issue testify to the continued vitality and conceptual breadth of research at the intersection of natural language, complexity science, and information theory [...].
The Mpemba effect, where a hotter system can enter a cold phase faster than a cooler one, remains a counterintuitive phenomenon whose origins are still being unraveled. In this work, we propose and demonstrate a simple and general mechanism for the genuine, phase transition-driven Mpemba effect. Our mechanism requires only a single order parameter to describe the system's state and operates within a standard Markovian framework, distinguishing it from previous models that necessitate multiple order parameters or non-Markovian dynamics. The core of the effect lies in the distinct relaxation pathways following a sudden quench: a system prepared at a higher initial temperature may be projected onto a region of the final free-energy landscape that requires it to cross fewer energy barriers to reach the stable low-temperature phase, whereas a system prepared at an intermediate temperature may be trapped in a metastable state, requiring the crossing of multiple barriers. We concretely illustrate this mechanism using the extended spin-1 Nagle-Kardar model, where an appropriate choice of parameters yields the requisite free-energy topography. Through extensive Monte Carlo simulations, we confirm that the initially hot system consistently reaches the final ferromagnetic phase in less time than its initially warm counterpart, thereby exhibiting a robust Mpemba effect. Our findings provide a minimal and clear explanation for how the initial state's position in order parameter space can dictate the kinetics of a first-order phase transition, leading to this anomalous acceleration of cooling.
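To make the quench protocol concrete, the following is a minimal sketch in which a spin-1 chain is prepared at an initial temperature, quenched to a low final temperature, and its relaxation tracked. The nearest-neighbour toy Hamiltonian and the couplings J and D are placeholder assumptions, not the extended Nagle-Kardar model itself; the sketch only illustrates how hot-versus-warm first-passage times can be compared.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_change(s, i, new, J, D):
    """Energy change for proposing s[i] -> new in the toy Hamiltonian
    H = -J sum_i s_i s_{i+1} + D sum_i s_i^2 (periodic chain)."""
    L = len(s)
    nn = s[(i - 1) % L] + s[(i + 1) % L]
    return -J * nn * (new - s[i]) + D * (new**2 - s[i]**2)

def sweeps(s, T, n, J=1.0, D=0.3):
    """Run n Metropolis sweeps at temperature T."""
    for _ in range(n * len(s)):
        i = rng.integers(len(s))
        new = rng.choice([-1, 0, 1])
        dE = energy_change(s, i, new, J, D)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = new
    return s

def quench(T_init, T_final=0.2, L=200):
    s = sweeps(rng.choice([-1, 0, 1], size=L), T_init, 500)   # prepare at T_init
    return np.array([abs(sweeps(s, T_final, 1).mean())        # relax at T_final
                     for _ in range(2000)])

# Compare first-passage times to a nearly ordered state for hot vs. warm starts.
for T0 in (3.0, 1.2):
    m = quench(T0)
    hit = int(np.argmax(m > 0.9)) if (m > 0.9).any() else None
    print(f"T_init = {T0}: sweeps until |m| > 0.9 -> {hit}")
```

A Mpemba effect in this sense would appear as the hotter initial state systematically crossing the threshold in fewer sweeps than the warm one.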
This paper addresses the signal detection problem in orthogonal frequency division multiplexing with index modulation (OFDM-IM) systems using deep learning (DL) techniques. In particular, a DL-based detector termed FullTrans-IM is proposed, which integrates the Transformer architecture with long short-term memory (LSTM) networks. Unlike conventional methods that treat signal detection as a classification task, the proposed approach reformulates it as a sequence prediction problem by exploiting the sequence modeling capability of the Transformer's decoder rather than relying solely on the encoder. This formulation enables the detector to effectively learn channel characteristics and modulation patterns, thereby improving detection accuracy and robustness. Simulation results demonstrate that the proposed FullTrans-IM detector achieves superior bit error rate (BER) performance compared with conventional methods such as zero-forcing (ZF) and existing DL-based detectors under Rayleigh fading channels.
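As an orientation to the "detection as sequence prediction" idea, here is a minimal PyTorch sketch that couples an LSTM front-end to a Transformer decoder. All dimensions, the token vocabulary, and the exact wiring are illustrative assumptions; this is not the published FullTrans-IM architecture.

```python
import torch
import torch.nn as nn

class SeqDetector(nn.Module):
    def __init__(self, rx_dim=2, d_model=64, vocab=16, heads=4, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(rx_dim, d_model, batch_first=True)  # received I/Q stream
        self.embed = nn.Embedding(vocab, d_model)                # already-decoded tokens
        layer = nn.TransformerDecoderLayer(d_model, heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=layers)
        self.out = nn.Linear(d_model, vocab)                     # per-step token logits

    def forward(self, rx, prev_tokens):
        memory, _ = self.lstm(rx)        # (B, N, d_model): channel/sequence features
        tgt = self.embed(prev_tokens)    # (B, T, d_model)
        T = tgt.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(h)               # predict the next index/constellation token

# Toy usage: batch of 8 subblocks, 4 received subcarriers (I/Q), 3 tokens decoded.
logits = SeqDetector()(torch.randn(8, 4, 2), torch.randint(0, 16, (8, 3)))
print(logits.shape)  # torch.Size([8, 3, 16])
```

The causal mask is what makes this a prediction problem rather than a one-shot classification: each detected token conditions the next.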
A scale-dependent effective temperature emerges as a unifying principle in the statistical physics of apparently different phenomena, namely quantum confinement in finite-size systems and non-equilibrium effects in thermodynamic systems. This concept effectively maps these inherently complex systems onto equilibrium states, thereby enabling the direct application of standard statistical physics methods. By offering a framework to analyze these systems as effectively at equilibrium, our approach provides powerful new tools that significantly expand the scope of the field. Just as the constant speed of light in Einstein's theory of special relativity necessitates a relative understanding of space and time, our fixed ratio of energy to temperature suggests a fundamental rescaling of both quantities that allows us to recognize shared patterns across diverse materials and situations.
Theories of emergent gravity have established a deep connection between entropy and the geometry of spacetime by looking at the latter through a thermodynamic lens. In this framework, the macroscopic properties of gravity arise in a statistical way from an effective small-scale discrete structure of spacetime and its information content. In this review, we begin by outlining how theories of quantum gravity imply the existence of a minimum length of spacetime as a general feature. We then describe how such a structure can be implemented in a way that is independent from the details of the quantum fluctuations of spacetime via a bi-tensorial quantum metric $q_{\alpha\beta}(x,x')$ that yields a finite geodesic distance in the coincidence limit $x \to x'$. Finally, we discuss how the entropy encoded by these microscopic degrees of freedom can give rise to the field equations for gravity through a thermodynamic variational principle.
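For orientation, one simple way such a finite coincidence limit can be realized (a common ansatz in the qmetric literature, with $L_0$ denoting the minimum length; the review's precise construction may differ) is to deform the squared geodesic distance so that it never vanishes:

```latex
\sigma^2(x,x') \;\longrightarrow\; \tilde{\sigma}^2(x,x') = \sigma^2(x,x') + L_0^2,
\qquad
\lim_{x \to x'} \tilde{\sigma}^2(x,x') = L_0^2 \neq 0,
```

with $q_{\alpha\beta}(x,x')$ built so that distances computed from it reproduce $\tilde{\sigma}^2$.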
We investigate the asymptotic maximum value and convergence of the Voronoi Entropy (VE) for a 2D random point process (S = 1.690 ± 0.001) and for point sets with long-range order characterized by hyperuniformity. We find that, for numbers of polygons n > 100, the VE ranges between S = 0 (ordered set of seed points) and S = 1.69 (random set of seed points). For circular regions with the dimensionless radius R normalized by the average distance between points, we identify two limits: Limit-1 (R = 2.5, 16 ± 6 points) is the minimum radius for which it is possible to construct a Voronoi diagram, and Limit-2 (R = 5.5, 96 ± 6 points) is the radius at which the VE reaches saturation. We also discuss examples of seed point patterns for which the VE exceeds the asymptotic value S = 1.69. While the VE accounts only for neighboring polygons, covering the 2D plane imposes constraints on the number of polygons and the number of edges per polygon. Consequently, unlike the conventional Shannon Entropy, the VE captures some long-range order properties of the system. We calculate the VE for several hyperuniform sets of points and compare it with the values of exponents of collective density variables characterizing long-range correlations in the system. We show that the VE correlates with the latter up to a certain saturation level, after which the value of the VE falls to S = 0, and we explain this phenomenon.
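The Voronoi entropy is the Shannon entropy of the cell-edge-count distribution, S = -Σₙ Pₙ ln Pₙ, with Pₙ the fraction of cells having n edges. A minimal sketch of its computation follows; discarding open boundary cells is one common convention and is an assumption here, so the paper's exact treatment may differ.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_entropy(points):
    """S = -sum_n P_n ln P_n over closed Voronoi cells."""
    vor = Voronoi(points)
    counts = {}
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:
            continue                    # open (boundary) cell: skip
        n = len(region)                 # a closed cell has as many edges as vertices
        counts[n] = counts.get(n, 0) + 1
    total = sum(counts.values())
    P = np.array([c / total for c in counts.values()])
    return float(-np.sum(P * np.log(P)))

# For uniformly random points, S should approach the ~1.69 value quoted above;
# a perfectly ordered lattice (all cells identical) would give S = 0.
rng = np.random.default_rng(1)
print(voronoi_entropy(rng.random((5000, 2))))
```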
This work introduces a path-dependent energy Lagrangian for irreversible thermomechanics that embeds heat and entropy accounting directly into the action. The formulation requires neither Lagrange multipliers nor Rayleigh potentials. An explicit θs term enforces Helmholtz conjugacy and positive heat capacity; writing heat as a divergence produces the natural flux; nonnegative dissipative productions are collected in a single modular term; and a history integral supplies an upper-limit variation that converts instantaneous power into entropy production. Stationarity yields the standard field equations together with a global entropy balance and a channel-wise power identity by placing each production once in entropy and once, with opposite sign, in its own channel. Classical closures, including Fourier and non-Fourier heat conduction, diffusion, and viscous mechanics, arise as special cases of the same functional. Compact examples show how the framework provides a unified action, a single entropy audit, and consistent positive production across coupled dissipative mechanisms.
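As a one-line reminder of what Helmholtz conjugacy and positive heat capacity mean for the free energy ψ(θ, ·) (standard thermodynamics, not the paper's full functional):

```latex
\psi = e - \theta s, \qquad s = -\frac{\partial \psi}{\partial \theta}, \qquad
c = \theta \frac{\partial s}{\partial \theta}
  = -\theta \frac{\partial^2 \psi}{\partial \theta^2} > 0 .
```

Positive heat capacity is thus equivalent to concavity of ψ in θ, which is what the explicit θs term in the action is said to enforce.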
This work explores late-time gravitational collapse using timelike thin-shell methods in classical general relativity. A junction surface separates a regular de Sitter interior from a Schwarzschild or Schwarzschild-de Sitter exterior in a post-transient regime with fixed exterior mass M (ADM for $\Lambda_+ = 0$), modelling a vacuum-energy core surrounded by an asymptotically classical spacetime. The configuration admits a natural thermodynamic interpretation based on a geometric area functional $S_{\mathrm{shell}} \propto R^2$ and Tolman redshift, both derived from classical junction conditions and used as an entropy-like coarse-grained quantity rather than a fundamental statistical entropy. Key results include (i) identification of a deceleration mechanism at the balance radius $R_{\mathrm{thr}} = (3M/\Lambda_-)^{1/3}$ for linear surface equations of state $p = w\sigma$; (ii) classification of the allowable radial domain $V(R) \le 0$ for outward evolution; (iii) bounded curvature invariants throughout the shell-supported spacetime region; and (iv) a mass-scaled frequency bound $f_c R_S \le \xi/(3\sqrt{3}\,\pi)$ for persistent near-shell spectral modes. All predictions follow from standard Israel junction techniques and provide concrete observational tests. The framework offers an analytically tractable example of regular thin-shell collapse dynamics within classical general relativity, with implications for alternative compact object scenarios.
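A heuristic way to see where the balance radius comes from (a dimensional sketch in geometric units, not the paper's derivation): the exterior attraction on the shell scales as $M/R^2$, while the interior de Sitter repulsion scales as $\Lambda_- R/3$; equating the two gives

```latex
\frac{M}{R^{2}} \sim \frac{\Lambda_- R}{3}
\quad\Longrightarrow\quad
R_{\mathrm{thr}} = \left(\frac{3M}{\Lambda_-}\right)^{1/3},
```

which reproduces the scaling of the quoted deceleration radius.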
We focus on a queueing model in which the sizes of arriving jobs are stochastically dependent and each job may be denied service with a probability determined by the queue size (active management). Both of these effects are known to occur in computer networking and many other real-world realizations of queueing systems. For such a model, we perform a thorough transient and stationary analysis of the job departure process and the job rejection process. The results include theorems on the expected number of jobs that depart within a specified time interval, the departure intensity at a given time, the stationary departure rate, the expected number of jobs rejected within a specified interval, the transient rejection intensity and the stationary rejection rate. Sample numerical calculations are provided for illustration. They include various settings of the level of dependence between jobs, job rejection probabilities, and system load, as well as their impact on the departure and rejection processes.
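The two model ingredients are easy to exercise in a discrete-event sketch: correlated job sizes (here a simple AR(1)-style dependence, which is an assumption; the paper's dependence structure may differ) and active management via a queue-size-dependent rejection probability. The empirical departure and rejection rates printed below are the quantities the paper characterizes analytically.

```python
import numpy as np

rng = np.random.default_rng(2)

def reject_prob(q, threshold=5, slope=0.15):
    """Illustrative rejection curve: zero below a threshold, then ramping up."""
    return min(1.0, max(0.0, slope * (q - threshold)))

def simulate(T=10_000.0, lam=0.9, mean_size=1.0, rho=0.6):
    t = 0.0
    queue = []                 # sizes of waiting jobs (head not yet in service)
    dep_time = np.inf          # completion time of the job in service
    prev = mean_size           # previous job size, for AR(1)-type dependence
    departures = rejections = 0
    next_arrival = rng.exponential(1 / lam)
    while t < T:
        if next_arrival <= dep_time:            # next event is an arrival
            t = next_arrival
            size = max(0.01, rho * prev + (1 - rho) * rng.exponential(mean_size))
            prev = size
            q = len(queue) + (0 if dep_time == np.inf else 1)
            if rng.random() < reject_prob(q):
                rejections += 1                 # job denied service
            elif dep_time == np.inf:
                dep_time = t + size             # server idle: start service now
            else:
                queue.append(size)              # otherwise wait in FIFO order
            next_arrival = t + rng.exponential(1 / lam)
        else:                                   # next event is a departure
            t = dep_time
            departures += 1
            dep_time = (t + queue.pop(0)) if queue else np.inf
    return departures / T, rejections / T

print(simulate())  # (empirical departure rate, empirical rejection rate)
```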
Quantum computing faces significant challenges from decoherence and noise, which limit the practical implementation of quantum algorithms. While substantial progress has been made in improving individual qubit coherence times, the collective behavior of interconnected qubit systems remains incompletely understood. The connectivity architecture plays a crucial role in determining overall system susceptibility to environmental noise, yet systematic characterization of this relationship has been hindered by computational complexity. We develop a machine learning framework that bridges graph features with quantum device characterization to predict decoherence lifetime directly from connectivity patterns. By representing quantum architectures as connected graphs and using 14 topological features as input to supervised learning models, we achieve accurate lifetime predictions with $R^2 > 0.96$ for both superconducting and semiconductor platforms. Our analysis reveals fundamentally distinct decoherence mechanisms: superconducting qubits show high sensitivity to global connectivity measures (betweenness centrality $\delta_1 = 0.484$, spectral entropy $\delta_1 = 0.480$), while semiconductor quantum dots exhibit exceptional sensitivity to system scale (node count $\delta_2 = 0.919$, importance = 1.860). The complete failure of cross-platform model transfer ($R^2$ scores of $-0.39$ and $-433.60$) emphasizes the platform-specific nature of optimal connectivity design. Our approach enables rapid assessment of quantum architectures without expensive simulations, providing practical guidance for noise-optimized quantum processor design.
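The pipeline itself is straightforward to sketch: represent a connectivity layout as a graph, extract topological features, and regress lifetime on them. The feature subset and the synthetic "lifetime" target below are illustrative stand-ins, not the paper's 14 features or measured data.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)

def features(G):
    """A few of the kinds of topological features the paper describes."""
    bc = nx.betweenness_centrality(G)
    lam = np.abs(nx.laplacian_spectrum(G))
    p = lam / lam.sum()
    spectral_entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return [G.number_of_nodes(), G.number_of_edges(),
            np.mean(list(bc.values())), nx.density(G),
            nx.average_clustering(G), spectral_entropy]

# Random connected graphs as stand-ins for qubit connectivity layouts.
graphs = [nx.gnp_random_graph(int(n), 0.3, seed=int(s))
          for n, s in zip(rng.integers(5, 30, 300), rng.integers(0, 10**6, 300))]
graphs = [G for G in graphs if nx.is_connected(G)]
X = np.array([features(G) for G in graphs])
# Synthetic lifetime: decays with average degree (a toy assumption, not data).
y = 100.0 / (1 + 0.5 * X[:, 1] / X[:, 0]) + rng.normal(0, 1, len(X))

model = RandomForestRegressor(n_estimators=200, random_state=0)
split = len(X) * 3 // 4
model.fit(X[:split], y[:split])
print("R^2 on held-out graphs:", r2_score(y[split:], model.predict(X[split:])))
```

Once trained on device-characterization data, such a regressor replaces expensive open-system simulation with a feature computation that runs in milliseconds per candidate layout.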

