Classification by separating hypersurfaces: An entropic approach
Pub Date: 2026-01-12 | DOI: 10.1016/j.physa.2026.131280
Argimiro Arratia, Mahmoud El Daou, Henryk Gzyl
This work introduces a variation on the theme of the classical linear classification problem, extending it to the separation of data by non-linear polynomial hypersurfaces, which allows for more complex decision boundaries. The classification problem, central to machine learning since the perceptron model, is transformed into an ill-posed, linear inverse problem with convex constraints.
We solve this using an entropy minimization procedure. Our approach differs from traditional setups in that we do not pre-specify the measure of separation for the training data. Instead, the solution itself yields this measure, quantifying the degree of non-separation between the training classes. Crucially, the method intrinsically defines a region of undecidability (or uncertainty): points falling within this region cannot be classified with certainty.
This entropic method offers a robust alternative (stable under small data perturbations) to traditional linear or quadratic optimization techniques such as Support Vector Machines (SVMs). Furthermore, we explicitly compare our entropic method against classification with Convolutional Neural Networks (CNNs), a prominent class of gradient-descent-based models. Numerical experiments on diverse datasets, including linear and non-linear cases, demonstrate the efficiency and versatility of the method, as well as its competitive performance against both established classical techniques and modern deep learning approaches.
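For intuition only, the sketch below (not taken from the paper) shows how lifting data into polynomial features turns a non-linear decision boundary into a linear one in the lifted space; a plain perceptron stands in for the authors' entropic solver, and the helper names (poly_features, perceptron), the degree-2 lift, and the toy circular dataset are illustrative assumptions.

```python
import numpy as np

def poly_features(X, degree=2):
    """Lift 2-D points to all monomials up to `degree` (assumed illustrative lift).
    For degree=2 and x = (x1, x2): [1, x1, x2, x1^2, x1*x2, x2^2]."""
    x1, x2 = X[:, 0], X[:, 1]
    cols = [np.ones_like(x1)]
    for d in range(1, degree + 1):
        for i in range(d + 1):
            cols.append(x1 ** (d - i) * x2 ** i)
    return np.stack(cols, axis=1)

def perceptron(Z, y, epochs=200, lr=0.1):
    """Plain perceptron in the lifted space; a stand-in for the entropic solver."""
    w = np.zeros(Z.shape[1])
    for _ in range(epochs):
        for z, t in zip(Z, y):          # labels t in {-1, +1}
            if t * (z @ w) <= 0:        # misclassified point -> update
                w += lr * t * z
    return w

rng = np.random.default_rng(0)
# toy data that is not linearly separable: class +1 inside the unit circle, -1 outside
X = rng.uniform(-2, 2, size=(400, 2))
y = np.where((X ** 2).sum(axis=1) < 1.0, 1, -1)

Z = poly_features(X)                    # degree-2 lift -> quadric decision boundary
w = perceptron(Z, y)
print("training accuracy:", (np.sign(Z @ w) == y).mean())
```

In the paper's formulation, the linear problem in the lifted space is instead solved by entropy minimization under convex constraints, which additionally yields the non-separation measure and the undecidability region described above.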
{"title":"Classification by separating hypersurfaces: An entropic approach","authors":"Argimiro Arratia , Mahmoud El Daou , Henryk Gzyl","doi":"10.1016/j.physa.2026.131280","DOIUrl":"10.1016/j.physa.2026.131280","url":null,"abstract":"<div><div>This work introduces a variation on the theme of the classical linear classification problem, extending it to the separation of data by non-linear polynomial hypersurfaces, which allows for more complex decision boundaries. The classification problem, central to machine learning since the perceptron model, is transformed into an ill-posed, linear inverse problem with convex constraints.</div><div>We solve this using an entropy minimization procedure. Our approach differs from traditional setups as we do not pre-specify the measure of separation for the training data. Instead, the solution itself yields this measure, quantifying the degree of non-separation between the training classes. Crucially, the method intrinsically provides a way to define a region of undecidability (or uncertainty), where points that fall within that region cannot be classified with certainty.</div><div>This entropic method offers a robust alternative (stable under small data perturbations) to traditional linear or quadratic optimization techniques like Support Vector Machines (SVMs). Furthermore, we explicitly compare our entropic method against classification using Convolutional Neural Networks (CNNs), a prominent class of gradient descent based models. Numerical experiments on diverse datasets, including linear and non-linear cases, demonstrate the efficiency and versatility of the method, plus its competitive performance against both established classical techniques and modern deep learning approaches.</div></div>","PeriodicalId":20152,"journal":{"name":"Physica A: Statistical Mechanics and its Applications","volume":"685 ","pages":"Article 131280"},"PeriodicalIF":3.1,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145981451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Fermi–Dirac distribution modeling of income distribution of developing and developed countries: A comparative study of Indonesia, United States, and Singapore
Pub Date: 2026-01-12 | DOI: 10.1016/j.physa.2026.131287
Muhammad Abraar Abhirama, Teguh Dartanto, Acep Purqon
The distribution of income constitutes a fundamental indicator of economic development, forming the empirical basis for assessing inequality through quantitative measures such as the Gini coefficient. As a contribution to the field of econophysics, this study develops an analytical model of income distribution for the Lorenz curve — a mathematical representation of income inequality — based on the Fermi–Dirac (FD) distribution. The purposes of this research are to establish the analogy between household income distribution and particle distribution using thermodynamic and statistical-mechanical arguments, to determine how well the FD distribution models income distribution, and to visualize the similar distributional behavior of the Lorenz curve and the FD distribution. We performed comparative analyses using household income data from countries with diverse economic characteristics: a large developed country (USA), a small developed country (Singapore), and a large developing country (Indonesia), sourced from each country’s official statistics bureau. Note that the Indonesian household income data are proxied by household consumption expenditure, a common practice in developing countries, which raises the question of whether the FD modeling remains robust under this proxy. The results show that, regardless of whether income or expenditure data are used, the FD model fits the household data of these countries well. Through this work, we show that direct modeling of the Lorenz curve by the FD distribution is possible and can be developed further.
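As a rough illustration of the computations involved (not the paper's exact formulation, which models the Lorenz curve directly), the sketch below fits a Fermi–Dirac-shaped complementary CDF to synthetic income data and computes the empirical Lorenz ordinates and Gini coefficient; the functional form fd_ccdf, the lognormal toy data, and all parameter values are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def fd_ccdf(x, mu, T):
    """Fermi-Dirac form for the fraction of households with income above x
    (a common econophysics parameterization, assumed here; not the paper's exact form)."""
    z = np.clip((x - mu) / T, -50, 50)
    return 1.0 / (np.exp(z) + 1.0)

def lorenz_and_gini(income):
    """Empirical Lorenz ordinates and Gini coefficient from an income sample."""
    x = np.sort(np.asarray(income, dtype=float))
    n = len(x)
    lorenz = np.insert(np.cumsum(x) / x.sum(), 0, 0.0)      # cumulative income share
    gini = 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n
    return lorenz, gini

rng = np.random.default_rng(1)
income = rng.lognormal(mean=10.0, sigma=0.6, size=5000)     # synthetic "household income"

xs = np.sort(income)
ccdf = 1.0 - np.arange(1, len(xs) + 1) / len(xs)            # empirical fraction above xs
(mu_hat, T_hat), _ = curve_fit(fd_ccdf, xs, ccdf, p0=[np.median(xs), xs.std()])

lorenz, gini = lorenz_and_gini(income)
print(f"fitted mu = {mu_hat:.0f}, T = {T_hat:.0f}, sample Gini = {gini:.3f}")
```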
{"title":"The Fermi–Dirac distribution modeling of income distribution of developing and developed countries: A comparative study of Indonesia, United States, and Singapore","authors":"Muhammad Abraar Abhirama , Teguh Dartanto , Acep Purqon","doi":"10.1016/j.physa.2026.131287","DOIUrl":"10.1016/j.physa.2026.131287","url":null,"abstract":"<div><div>The distribution of income constitutes a fundamental indicator of economic development, forming the empirical basis for assessing inequality through quantitative measures such as the Gini coefficient. As a contribution to the field of econophysics, this study develops an analytical model of income distribution for the Lorenz curve — a mathematical representation of income inequality — based on the Fermi–Dirac (FD) distribution. The purposes of this research are: proving the analogy between household income distribution and particle distribution using the thermodynamics-based and statistical mechanics-based approach, determining the fitness level of the FD distribution in modeling the income distribution, and visualizing the similar distribution behavior between the Lorenz curve and the FD Distribution. We performed comparative analyses along with household income data from countries with diverse economic characteristics, including the large-size developed country (USA), the small-size developed country (Singapore), and the large-size developing country (Indonesia), sourced from each country’s official statistics bureau. Note that the Indonesian household income dataset is proxied by household consumption expenditure, a common practice found in developing countries, raising its own intriguing question of whether the FD modeling would be robust under this proxy approach. The results then show that, regardless of the household income or expenditure data approach, the fitting model effectively fits the household income data within these countries. Through this work, we show that a direct modeling of Lorenz curve by the FD model is possible, and is to be further developed afterwards.</div></div>","PeriodicalId":20152,"journal":{"name":"Physica A: Statistical Mechanics and its Applications","volume":"685 ","pages":"Article 131287"},"PeriodicalIF":3.1,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145980819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamics and evolution of cooperation of the multiplayer snowdrift games on hypergraphs
Pub Date: 2026-01-09 | DOI: 10.1016/j.physa.2026.131276
Yue Zhang, Ye Ye, Wei Bao, Huai-he Huang, Neng-gang Xie
We propose a Poor-Defect-Rich-Cooperation (PDRC) dynamic strategy and introduce it into a threshold-based multiplayer snowdrift game. A three-strategy framework, namely pure cooperation (C), pure defection (D), and Poor-Defect-Rich-Cooperation (PDRC), is constructed to investigate the dynamics of multi-strategy games and the evolution mechanism of cooperation on hypergraphs. By comparing with a two-strategy system containing only cooperation and defection, simulation results show that the introduction of the PDRC strategy significantly promotes the evolution of cooperation under specific parameters. In the two-strategy system, the defection strategy dominates and severely suppresses cooperation. In contrast, the three-strategy system eventually reaches a dynamic equilibrium dominated by the PDRC strategy, with the average group payoff and average cooperation rate significantly higher than those of the two-strategy system. As the cooperator threshold decreases, the range of the cost-to-benefit ratio over which the three-strategy system outperforms the two-strategy system in average cooperation level and average group payoff gradually expands. The PDRC strategy enhances the system's adaptability to environments with low thresholds and low cost-to-benefit ratios: it can withstand the risk of collapse in high cost-to-benefit-ratio scenarios and avoids the complete collapse observed in the two-strategy system. The introduction of the negative-cost mechanism does not change the cost-to-benefit-ratio interval over which the three-strategy system outperforms the two-strategy system, but it leads to different evolutionary outcomes. The enhancement brought by negative cost is non-uniform across thresholds: the three-strategy system performs better at certain thresholds, and the strategy distribution exhibits richer evolutionary dynamics.
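The paper's exact payoff and wealth-update rules are not reproduced here; the sketch below only illustrates, under assumed forms, a threshold multiplayer snowdrift payoff on a single hyperedge together with a PDRC-style rule in which an agent cooperates when its accumulated payoff is at or above the group median ("rich cooperates, poor defects"). The benefit b, cost c, threshold M, and the median-based criterion are illustrative assumptions.

```python
import numpy as np

def group_payoffs(actions, b=1.0, c=0.6, M=2):
    """Threshold multiplayer snowdrift payoff for one hyperedge (assumed form):
    the benefit b is produced only if at least M members cooperate, and the total
    cost c is shared equally among the cooperators."""
    n_coop = int(actions.sum())
    pay = np.zeros(len(actions))
    if n_coop >= M:
        pay[:] = b
        pay[actions == 1] -= c / n_coop
    return pay

def choose_actions(strategies, wealth):
    """C always cooperates, D always defects; a PDRC agent cooperates only when its
    accumulated payoff is at or above the group median (illustrative criterion)."""
    acts = np.zeros(len(strategies), dtype=int)
    acts[strategies == "C"] = 1
    pdrc = strategies == "PDRC"
    acts[pdrc] = (wealth[pdrc] >= np.median(wealth)).astype(int)
    return acts

strategies = np.array(["C", "D", "PDRC", "PDRC", "D"])      # one toy hyperedge of 5 players
wealth = np.zeros(5)
for _ in range(50):                                         # repeated rounds on the same group
    acts = choose_actions(strategies, wealth)
    wealth += group_payoffs(acts)
print(list(zip(strategies.tolist(), np.round(wealth, 2).tolist())))
```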
{"title":"Dynamics and evolution of cooperation of the multiplayer snowdrift games on hypergraphs","authors":"Yue Zhang , Ye Ye , Wei Bao , Huai-he Huang , Neng-gang Xie","doi":"10.1016/j.physa.2026.131276","DOIUrl":"10.1016/j.physa.2026.131276","url":null,"abstract":"<div><div>We propose a Poor-Defect-Rich-Cooperation (PDRC) dynamic strategy and introduce it into a threshold-based multiplayer snowdrift game. A three-strategy framework, namely pure cooperation (C), pure defection (D), and Poor - Defect - Rich - Cooperation (PDRC), is constructed to investigate the dynamics of multi-strategy games and the evolution mechanism of cooperation on hypergraphs. By comparing with a two-strategy system containing only cooperation and defection, simulation results show that the introduction of the PDRC strategy significantly promotes the evolution of cooperation under specific parameters. In the two-strategy system, the defection strategy dominates and severely suppresses cooperation. In contrast, the three-strategy system eventually reaches a dynamic equilibrium dominated by the PDRC strategy, with the average group payoff and average cooperation rate being significantly higher than those of the two-strategy system. As the cooperator threshold decreases, the range of the cost-to-benefit ratio in which the three-strategy system outperforms the two-strategy system in terms of average cooperation level and average group payoff gradually expands. The PDRC strategy enhances the system's adaptability to low-threshold, low cost-to-benefit ratio environments. It can withstand the risk of collapse in high cost-to-benefit scenarios and avoid the complete collapse observed in the two-strategy system. The introduction of the negative cost mechanism does not change the cost-to-benefit ratio interval in which the three-strategy system outperforms the two-strategy system, but it leads to different evolutionary outcomes. The effect of negative cost is non-uniformly enhanced across different thresholds. The three-strategy system performs better at certain thresholds, and the strategy distribution exhibits richer evolutionary dynamics.</div></div>","PeriodicalId":20152,"journal":{"name":"Physica A: Statistical Mechanics and its Applications","volume":"685 ","pages":"Article 131276"},"PeriodicalIF":3.1,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145981450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The phase-transition in the Vicsek model through Gini index
Pub Date: 2026-01-09 | DOI: 10.1016/j.physa.2026.131274
Ravitheja Kandalam, Soumyaditya Das, Supravat Dey
At the critical point of a phase transition, the fourth-order Binder cumulant (U), which measures the departure from Gaussianity of the order parameter, becomes independent of the system size and obeys a finite-size scaling relation governed by the correlation-length exponent. This feature has been widely used to estimate critical points and exponents accurately from simulations. Recently, the Gini index g, a measure of inequality traditionally used in economics to quantify wealth inequality, has been shown to be useful for studying phase transitions in physical systems at equilibrium. For equilibrium phase transitions, it has been numerically demonstrated that at the critical point the Gini index g of the order parameter becomes independent of the system size and follows a finite-size scaling relation similar to that of U. Here, we investigate the nonequilibrium phase transition in the two-dimensional Vicsek model of active systems using the index g. For this model, we find that g and U exhibit similar behavior. For a high self-propelled velocity, both g and U are system-size independent at the critical point and obey a finite-size scaling relation with the correlation-length exponent. For small self-propulsion, we find the transition is unusual, as there is no single point where g (or U) for various system sizes crosses.
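Both diagnostics are ordinary sample statistics. The sketch below computes the Binder cumulant U = 1 − ⟨m⁴⟩ / (3⟨m²⟩²) and a Gini index g (here the mean-absolute-difference form) from a set of order-parameter samples; the synthetic numbers stand in for values of the polar order parameter collected from repeated Vicsek runs at one system size and noise strength.

```python
import numpy as np

def binder_cumulant(m):
    """Fourth-order Binder cumulant U = 1 - <m^4> / (3 <m^2>^2)."""
    m = np.asarray(m, dtype=float)
    return 1.0 - np.mean(m ** 4) / (3.0 * np.mean(m ** 2) ** 2)

def gini_index(m):
    """Gini index of non-negative order-parameter samples,
    g = sum_ij |m_i - m_j| / (2 n^2 <m>)."""
    m = np.abs(np.asarray(m, dtype=float))
    return np.abs(m[:, None] - m[None, :]).sum() / (2.0 * len(m) ** 2 * m.mean())

# synthetic stand-in for order-parameter samples from repeated simulation runs
# at a single system size and noise strength
rng = np.random.default_rng(3)
samples = np.clip(rng.normal(0.4, 0.15, size=2000), 0.0, 1.0)
print(f"U = {binder_cumulant(samples):.3f}, g = {gini_index(samples):.3f}")
```

Repeating this for several system sizes and control-parameter values, and locating where the curves for different sizes cross, is the standard way both U and g are used to estimate the critical point.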
{"title":"The phase-transition in the Vicsek model through Gini index","authors":"Ravitheja Kandalam, Soumyaditya Das, Supravat Dey","doi":"10.1016/j.physa.2026.131274","DOIUrl":"10.1016/j.physa.2026.131274","url":null,"abstract":"<div><div>At the critical point of a phase transition, the fourth-order Binder cumulant (<span><math><mi>U</mi></math></span>), which is a measure of departure from Gaussianity in the order parameter, becomes independent of the system size and obeys a finite-size scaling relation with the correlation length exponent. This feature has been widely used to estimate the critical point and exponents accurately from simulations. Recently, the Gini index, a measure of inequality that traditionally is used in economics to quantify wealth inequality, has been shown to be useful for studying phase transitions in physical systems at equilibrium. By studying phase transition in equilibrium, it has been numerically demonstrated that at the critical point, <span><math><mi>g</mi></math></span> in the order parameter becomes independent of the system size and follows a finite-size scaling relation similar to <span><math><mi>U</mi></math></span>. Here, we investigate the nonequilibrium phase transition in the Vicsek model of active systems in two dimensions using the index <span><math><mi>g</mi></math></span>. For this model, we find that both <span><math><mi>g</mi></math></span> and <span><math><mi>U</mi></math></span> exhibit similar behavior. For a high self-propelled velocity, both <span><math><mi>g</mi></math></span> and <span><math><mi>U</mi></math></span> are system size independent at the critical point and obey a finite-size scaling relation for the correlation length exponent. For small self-propulsion, we find the transition is unusual, as there is no single point where <span><math><mi>g</mi></math></span> (or <span><math><mi>U</mi></math></span>) for various system sizes crosses.</div></div>","PeriodicalId":20152,"journal":{"name":"Physica A: Statistical Mechanics and its Applications","volume":"685 ","pages":"Article 131274"},"PeriodicalIF":3.1,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145980826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calibration of a macroscopic automated-vehicle traffic flow model for lane-free traffic
Pub Date: 2026-01-08 | DOI: 10.1016/j.physa.2025.131250
George Titakis, Ioannis Papamichail, Iasson Karafyllis, Dionysios Theodosis, Markos Papageorgiou
The present paper provides evidence that formally derived macroscopic models for the description of the traffic flow of vehicles under the effect of cruise controllers are successful even in very complex cases. A macroscopic traffic flow model for automated vehicles is calibrated to fit traffic data collected from microscopic simulation of a lane-free vehicle movement strategy. The considered macroscopic model was formally derived – using a particle method – from this particular microscopic movement strategy. To evaluate the accuracy of the calibrated macroscopic model, two traffic scenarios are considered. In the first scenario, a highway stretch with an off-ramp and an on-ramp is used, while in the second scenario a funnel-like narrowing of the road is considered. In both cases, it is shown that the calibrated macroscopic model reproduces with high accuracy all the dynamically changing traffic conditions that appear in the microscopically produced traffic data.
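The paper's formally derived macroscopic model is not reproduced here; purely to illustrate what a calibration step of this kind looks like, the sketch below fits a simple Greenshields-type speed-density relation to synthetic "microscopic" measurements by least squares. The model form, the parameter names v_free and rho_jam, and the data are all assumptions for the example.

```python
import numpy as np
from scipy.optimize import least_squares

def model_speed(params, density):
    """Greenshields-type relation v(rho) = v_free * (1 - rho / rho_jam); used only to
    illustrate the calibration step, not the paper's formally derived model."""
    v_free, rho_jam = params
    return v_free * np.clip(1.0 - density / rho_jam, 0.0, None)

def residuals(params, density, speed_obs):
    return model_speed(params, density) - speed_obs

# synthetic "microscopic" measurements: density (veh/km) and mean speed (km/h)
rng = np.random.default_rng(4)
density = rng.uniform(5, 120, size=300)
speed_obs = 110.0 * np.clip(1.0 - density / 140.0, 0.0, None) + rng.normal(0.0, 4.0, size=300)

fit = least_squares(residuals, x0=[100.0, 150.0], args=(density, speed_obs))
v_free_hat, rho_jam_hat = fit.x
print(f"calibrated v_free = {v_free_hat:.1f} km/h, rho_jam = {rho_jam_hat:.1f} veh/km")
```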
{"title":"Calibration of a macroscopic automated-vehicle traffic flow model for lane-free traffic","authors":"George Titakis , Ioannis Papamichail , Iasson Karafyllis , Dionysios Theodosis , Markos Papageorgiou","doi":"10.1016/j.physa.2025.131250","DOIUrl":"10.1016/j.physa.2025.131250","url":null,"abstract":"<div><div>The present paper provides evidence that formally derived macroscopic models for the description of the traffic flow of vehicles under the effect of cruise controllers are successful even in very complex cases. A macroscopic traffic flow model for automated vehicles is calibrated to fit traffic data collected from the microscopic simulation with a lane-free vehicle movement strategy. The considered macroscopic model was formally derived – using a particle method – from the particular microscopic movement strategy. To evaluate the accuracy of the calibrated macroscopic model, two traffic scenarios are considered. In the first scenario, a highway stretch with an off-ramp and an on-ramp is used, while in the second scenario a funnel-like narrowing of the road is considered. In both cases, it is shown that the calibrated macroscopic model reproduces with high accuracy all the dynamically changing traffic conditions that appear in the microscopically produced traffic data.</div></div>","PeriodicalId":20152,"journal":{"name":"Physica A: Statistical Mechanics and its Applications","volume":"685 ","pages":"Article 131250"},"PeriodicalIF":3.1,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145947896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating dynamic transmission rates with a Black–Karasinski process in stochastic SIHR models using particle MCMC
Pub Date: 2026-01-07 | DOI: 10.1016/j.physa.2026.131268
Avery Drennan, Jeffrey Covington, Dan Han, Andrew Attilio, Jaechoul Lee, Richard Posner, Eck Doerry, Joseph Mihaljevic, Ye Chen
Compartmental models are effective in modeling the spread of infectious pathogens, but they remain limited when fitted to real datasets exhibiting stochastic effects. We propose a stochastic SIHR model with a dynamic transmission rate, where the rate is modeled by the Black–Karasinski (BK) process — a mean-reverting stochastic process with a stable equilibrium distribution, making it well-suited for modeling long-term epidemic dynamics. To generate sample paths of the BK process and estimate static parameters of the system, we employ particle Markov Chain Monte Carlo (pMCMC) methods due to their effectiveness in handling complex state-space models and jointly estimating parameters. We designed experiments on synthetic data to assess estimation accuracy and its impact on inferred transmission rates; all BK-process parameters were estimated accurately except the mean-reversion rate. We also assess the sensitivity of pMCMC to misspecification of the mean-reversion rate. Our results show that estimation accuracy remains stable across different mean-reversion rates, though smaller values increase error variance and complicate inference. Finally, we apply our model to Arizona flu hospitalization data, finding that parameter estimates are consistent with published survey data.
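As a sketch of the forward model only (the particle MCMC inference itself is not shown), the code below simulates an SIHR system whose transmission rate follows a Black–Karasinski process, i.e. a mean-reverting log-OU diffusion, using an Euler–Maruyama discretization; the compartment flow structure and all rate values are assumptions for the example rather than the paper's calibrated quantities.

```python
import numpy as np

def simulate_sihr_bk(T=200.0, dt=0.1, N=1e6, kappa=0.5, theta=np.log(0.3), sigma=0.2,
                     gamma=0.2, eta=0.05, rho=0.1, seed=5):
    """Euler-Maruyama simulation of an SIHR model whose transmission rate beta_t follows
    a Black-Karasinski (mean-reverting log-OU) process:
        d log(beta) = kappa * (theta - log(beta)) dt + sigma dW.
    Assumed flows: S->I at beta*S*I/N, I->R at gamma*I, I->H at eta*I, H->R at rho*H."""
    rng = np.random.default_rng(seed)
    S, I, H, R = N - 100.0, 100.0, 0.0, 0.0
    log_beta = theta
    traj = []
    for _ in range(int(T / dt)):
        beta = np.exp(log_beta)
        inf_flow = beta * S * I / N * dt        # S -> I
        rec_flow = gamma * I * dt               # I -> R
        hosp_flow = eta * I * dt                # I -> H
        disc_flow = rho * H * dt                # H -> R
        S -= inf_flow
        I += inf_flow - rec_flow - hosp_flow
        H += hosp_flow - disc_flow
        R += rec_flow + disc_flow
        log_beta += kappa * (theta - log_beta) * dt + sigma * np.sqrt(dt) * rng.normal()
        traj.append((beta, I, H))
    return np.array(traj)

traj = simulate_sihr_bk()
print("peak hospitalized (toy run):", round(traj[:, 2].max()))
```

In a pMCMC setting, trajectories of this kind would be generated inside a particle filter to evaluate the likelihood of candidate static parameters.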
{"title":"Estimating dynamic transmission rates with a Black–Karasinski process in stochastic SIHR models using particle MCMC","authors":"Avery Drennan , Jeffrey Covington , Dan Han , Andrew Attilio , Jaechoul Lee , Richard Posner , Eck Doerry , Joseph Mihaljevic , Ye Chen","doi":"10.1016/j.physa.2026.131268","DOIUrl":"10.1016/j.physa.2026.131268","url":null,"abstract":"<div><div>Compartmental models are effective in modeling the spread of infectious pathogens, but have remaining weaknesses in fitting to real datasets exhibiting stochastic effects. We propose a stochastic SIHR model with a dynamic transmission rate, where the rate is modeled by the Black–Karasinski (BK) process — a mean-reverting stochastic process with a stable equilibrium distribution, making it well-suited for modeling long-term epidemic dynamics. To generate sample paths of the BK process and estimate static parameters of the system, we employ particle Markov Chain Monte Carlo (pMCMC) methods due to their effectiveness in handling complex state-space models and jointly estimating parameters. We designed experiments on synthetic data to assess estimation accuracy and its impact on inferred transmission rates; all BK-process parameters were estimated accurately except the mean-reverting rate. We also assess the sensitivity of pMCMC to misspecification of the mean-reversion rate. Our results show that estimation accuracy remains stable across different mean-reversion rates, though smaller values increase error variance and complicate inference results. Finally, we apply our model to Arizona flu hospitalization data, finding that parameter estimates are consistent with published survey data.</div></div>","PeriodicalId":20152,"journal":{"name":"Physica A: Statistical Mechanics and its Applications","volume":"684 ","pages":"Article 131268"},"PeriodicalIF":3.1,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impact of the caregiver-to-resident ratio on evacuation efficiency in nursing homes: A simulation-based study
Pub Date: 2026-01-07 | DOI: 10.1016/j.physa.2026.131273
Hongliu Li, Jacqueline Tsz Yin Lo
With the growing aging population, ensuring effective and safe evacuation of older people in nursing homes has become a pressing concern in emergency planning. This study presents an enhanced agent-based evacuation model that explicitly incorporates caregiver-assisted movement dynamics, intelligent pathfinding, and collision avoidance to simulate evacuation processes in nursing homes. The model is applied to evaluate the influence of the caregiver-to-resident ratio on evacuation efficiency under different resident distributions. The results demonstrate that increasing the caregiver-to-resident ratio does not improve evacuation efficiency and may even hinder it in complex building layouts. When caregivers are rationally allocated, evacuation time in scenarios involving wheelchair users can be shorter than in those without wheelchairs, owing to more coordinated and assisted movements. These findings provide valuable insights for optimizing nursing home evacuation strategies and emphasize that caregiver allocation needs to account for residents' locations and mobility capabilities.
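As a minimal, assumed illustration of the pathfinding ingredient of such agent-based models (not the paper's implementation), the sketch below computes a shortest evacuation route on a small occupancy grid using breadth-first search; the floor layout and the 4-connected movement rule are toy choices.

```python
from collections import deque

def shortest_path(grid, start, exit_):
    """Breadth-first search on a 4-connected grid (1 = walkable, 0 = wall); a minimal
    stand-in for the pathfinding component of an agent-based evacuation model."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == exit_:
            path, node = [], exit_
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None                                   # no route to the exit

floor = [[1, 1, 1, 1],
         [1, 0, 0, 1],
         [1, 1, 0, 1],
         [0, 1, 1, 1]]
print(shortest_path(floor, start=(0, 0), exit_=(3, 3)))
```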
{"title":"Impact of the caregiver-to-resident ratio on evacuation efficiency in nursing homes: A simulation-based study","authors":"Hongliu Li, Jacqueline Tsz Yin Lo","doi":"10.1016/j.physa.2026.131273","DOIUrl":"10.1016/j.physa.2026.131273","url":null,"abstract":"<div><div>With the growing aging population, ensuring effective and safe evacuation of older people in nursing homes has become a pressing concern in emergency planning. This study presents an enhanced agent-based evacuation model that explicitly incorporates caregiver-assisted movement dynamics, intelligent pathfinding, and collision avoidance to simulate evacuation processes in nursing homes. The model is applied to evaluate the caregiver-to-resident ratio’s influence on the evacuation efficiency of residents with different distributions. The results demonstrate that increasing the caregiver-to-resident ratio does not positively affect evacuation efficiency and may even hinder it in complex building layouts. When caregivers are rationally allocated, evacuation time in scenarios involving wheelchair users can be shorter than in those without wheelchairs, due to more coordinated and assisted movements. These findings provide valuable insights for optimizing the evacuation strategies of nursing homes and emphasize that the allocation of caregivers needs to consider the locations of residents and their mobility capabilities.</div></div>","PeriodicalId":20152,"journal":{"name":"Physica A: Statistical Mechanics and its Applications","volume":"685 ","pages":"Article 131273"},"PeriodicalIF":3.1,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145980827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stability analysis and motion control of autonomous vehicles under nonlinear disturbances on low-friction surfaces
Pub Date: 2026-01-07 | DOI: 10.1016/j.physa.2026.131265
Huanning Yang, Ping Wang, Yunfeng Hu, Ting Qu, Zihan Li
This study focuses on the motion control problem of autonomous vehicles operating on low-friction surfaces under nonlinear disturbances. Under such conditions the vehicle is highly prone to instability, so both stability analysis and control design are required. Based on Lyapunov stability theory and incorporating tangent-direction difference information, a predictive dynamic stability envelope region is constructed, which, for the first time, reveals the relationship among vehicle instability risk, velocity, road adhesion coefficient, and curvature. This predictive region provides a theoretical foundation for proactive controller intervention to prevent vehicle instability. To mitigate the effects of unknown nonlinear disturbances, an adaptive disturbance-rejection controller is developed within the nonlinear model predictive control framework. Specifically, a Random Projection Neural Network-based error estimator is employed, whose weights are continuously updated by the designed adaptive weight update law, enabling real-time compensation of uncertainties in the control loop. Owing to this adaptive update law, the proposed estimator effectively reduces the adverse impact of model uncertainties while ensuring closed-loop stability, thereby improving trajectory tracking performance. To further meet real-time requirements, a numerical solution algorithm is proposed that integrates the continuation method with the Nelder–Mead method; it enhances computational efficiency by reducing the dimensionality of the optimization variables. Finally, the proposed controller is validated through both simulations and experiments. The results demonstrate that the controller can maintain vehicle stability and reduce tracking errors while satisfying real-time requirements.
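The sketch below shows a generic random-projection-network disturbance estimator: the hidden-layer weights are fixed at random and only the output weights are adapted with a gradient-like law driven by the estimation error. It is an assumed, simplified stand-in for the paper's estimator and adaptive weight update law; the class name RPNNEstimator, the gains, and the toy disturbance are chosen only for illustration.

```python
import numpy as np

class RPNNEstimator:
    """Random-projection network with fixed random hidden weights and adaptively
    updated output weights (generic sketch; the paper's specific update law and
    stability proof are not reproduced here)."""
    def __init__(self, n_in, n_hidden=50, gain=0.5, seed=6):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(size=(n_hidden, n_in))   # fixed random projection
        self.b = rng.normal(size=n_hidden)
        self.w = np.zeros(n_hidden)                  # adapted output weights
        self.gain = gain

    def features(self, x):
        return np.tanh(self.A @ x + self.b)

    def estimate(self, x):
        return self.w @ self.features(x)

    def adapt(self, x, error, dt):
        # gradient-like adaptive law: w_dot = gain * phi(x) * error
        self.w += self.gain * self.features(x) * error * dt

# toy usage: learn an unknown state-dependent disturbance d(x) = 0.5*sin(x0) online
est = RPNNEstimator(n_in=2)
dt = 0.01
for k in range(20000):
    x = np.array([np.sin(0.01 * k), np.cos(0.013 * k)])   # persistently varying state
    d_true = 0.5 * np.sin(x[0])
    error = d_true - est.estimate(x)
    est.adapt(x, error, dt)
print("final estimation error:", abs(error))
```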
{"title":"Stability analysis and motion control of autonomous vehicles under nonlinear disturbances on low-friction surfaces","authors":"Huanning Yang , Ping Wang , Yunfeng Hu , Ting Qu , Zihan Li","doi":"10.1016/j.physa.2026.131265","DOIUrl":"10.1016/j.physa.2026.131265","url":null,"abstract":"<div><div>This study focuses on the motion control problem of autonomous vehicles operating on low-friction surfaces under nonlinear disturbances. Under such conditions, the vehicle system is highly prone to instability. Thus, stability analysis and control study need to be carried out. Based on Lyapunov stability theory and incorporating tangent direction difference information, a predictive dynamic stability envelope region is constructed, which, for the first time, reveals relationship among vehicle instability risk, velocity, road adhesion coefficient, and curvature. This predictive region provides a theoretical foundation for proactive controller intervention to prevent vehicle instability. To mitigate the effects of unknown nonlinear disturbances, an adaptive disturbance-rejection controller is developed within the nonlinear model predictive control framework. Specifically, a Random Projection Neural Network-based error estimator is employed, whose weights are continuously updated by the designed adaptive weight update law, enabling real-time compensation of uncertainties in the control loop. Owing to the designed adaptive update law, the proposed estimator effectively reduces the adverse impact of model uncertainties while ensuring closed-loop stability, thereby improving trajectory tracking performance. To further ensure real-time requirement, a numerical solution algorithm is proposed, which integrates the continuation method with the Nelder–Mead method. This approach enhances computational efficiency by reducing the dimensionality of the optimization variables. Finally, the proposed controller is validated through both simulations and experiments. The results demonstrate that the controller can maintain vehicle stability and decrease tracking errors while satisfying the real-time requirements.</div></div>","PeriodicalId":20152,"journal":{"name":"Physica A: Statistical Mechanics and its Applications","volume":"684 ","pages":"Article 131265"},"PeriodicalIF":3.1,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Consensus as cooling: A granular gas model for continuous opinions on structured networks
Pub Date: 2026-01-07 | DOI: 10.1016/j.physa.2026.131270
Carlos Uriarte, Pablo Rodriguez-Lopez, Nagi Khalil
A continuous-opinion model accounting for the social compromise propensity is theoretically and numerically analysed. An agent’s opinion is represented by a real number that can be changed through social interactions with her neighbours. The proposed dynamics depends on two fundamental parameters, α ∈ [−1, 1] and β ≥ 0. If an interaction takes place between two agents, their relative opinion decreases by an amount determined by α. The probability that two neighbours interact is proportional to the β-th power of their relative opinion. We unveil the behaviour of the system for all physically relevant values of the parameters and several representative interaction networks. When α ∈ (−1, 1) and β ≥ 0, the system always reaches consensus, with all agents holding the mean initial opinion, provided the interaction network is connected. The approach to consensus can be characterized by means of the mean opinion and the temperature (or opinion dispersion) of each agent. Three scenarios have been identified. When the agents are well mixed, as with all-to-all interactions, a pre-consensus regime is seen, with all agents having zero mean opinion and the same temperature, following Haff’s law of granular gases. A similar regime is observed with Erdős–Rényi and Barabási–Albert networks: mean opinions are zero but agents with different degrees have different temperatures, though still following Haff’s law. Finally, the case of a square 2D lattice has been carefully analysed, starting from the derivation of a closed set of hydrodynamic-like equations using the Chapman–Enskog method. For α larger than a critical value, which depends on the system size, the system remains spatially homogeneous, with zero mean opinions and equal temperatures, as it approaches consensus. Below this critical line, the system explores spatially non-homogeneous configurations that evolve in time. Numerically, it is found that the main role of β is to change the local structure of the spatial opinion dispersion: while for β small enough the system remains locally isotropic, as β increases, neighbouring agents with similar opinions tend to form local linear structures.
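A minimal sketch of the dynamics under one natural reading of the rule, following the granular-gas analogy in the title: each interaction conserves the pair's mean opinion and multiplies their relative opinion by −α, and interacting pairs are drawn with probability proportional to |Δx|^β. The all-to-all toy network, the parameter values, and the function names are assumptions for illustration.

```python
import numpy as np

def interact(x, i, j, alpha):
    """Granular-gas-like update (assumed form of the rule): the pair's mean opinion is
    conserved and the relative opinion is multiplied by -alpha, so |x_i - x_j| shrinks
    by a factor |alpha| for alpha in (-1, 1)."""
    delta = 0.5 * (1.0 + alpha) * (x[i] - x[j])
    x[i] -= delta
    x[j] += delta

def step(x, edges, alpha, beta, rng):
    """Pick one edge with probability proportional to |opinion difference|**beta,
    then let the two endpoints interact."""
    diffs = np.abs(x[edges[:, 0]] - x[edges[:, 1]]) ** beta
    if diffs.sum() == 0:
        return
    k = rng.choice(len(edges), p=diffs / diffs.sum())
    interact(x, edges[k, 0], edges[k, 1], alpha)

# toy usage on an all-to-all network of 50 agents
rng = np.random.default_rng(7)
n = 50
x = rng.uniform(-1, 1, size=n)
edges = np.array([(i, j) for i in range(n) for j in range(i + 1, n)])
mean0 = x.mean()
for _ in range(20000):
    step(x, edges, alpha=0.3, beta=1.0, rng=rng)
print("mean conserved:", np.isclose(x.mean(), mean0), " opinion spread:", x.std())
```

The shrinking opinion spread plays the role of the decaying granular temperature, which is why consensus in this model can be read as cooling.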
{"title":"Consensus as cooling: A granular gas model for continuous opinions on structured networks","authors":"Carlos Uriarte , Pablo Rodriguez-Lopez , Nagi Khalil","doi":"10.1016/j.physa.2026.131270","DOIUrl":"10.1016/j.physa.2026.131270","url":null,"abstract":"<div><div>A continuous-opinion model accounting for the social compromise propensity is theoretically and numerically analysed. An agent’s opinion is represented by a real number that can be changed through social interactions with her neighbours. The proposed dynamics depends on two fundamental parameters, <span><math><mrow><mi>α</mi><mo>∈</mo><mrow><mo>[</mo><mo>−</mo><mn>1</mn><mo>,</mo><mn>1</mn><mo>]</mo></mrow></mrow></math></span> and <span><math><mrow><mi>β</mi><mo>≥</mo><mn>0</mn></mrow></math></span>. If an interaction takes place between two agents, their relative opinions decreases an amount given by <span><math><mi>α</mi></math></span>. The probability of two neighbours to interact is proportional to the <span><math><mi>β</mi></math></span>-power of their relative opinions. We unveil the behaviour of the system for all physical relevant values of the parameters and several representative interaction networks. When <span><math><mrow><mi>α</mi><mo>∈</mo><mrow><mo>(</mo><mo>−</mo><mn>1</mn><mo>,</mo><mn>1</mn><mo>)</mo></mrow></mrow></math></span> and <span><math><mrow><mi>β</mi><mo>≥</mo><mn>0</mn></mrow></math></span>, the system always reaches consensus, with all agents having the mean initial opinion, provided the interaction network is connected. The approach to consensus can be characterized by means of the mean opinion and the temperature (or opinion dispersion) of each agent. Three scenarios have been identified. When the agents are well mixed, as with all-to-all interactions, a pre-consensus regime is seen, with all agents having zero mean opinion and the same temperature, following the Haff’s law of granular gases. A similar regime is observed with Erdös–Rényi and Barabási–Albert networks: mean opinions are zero but agents with different degrees have different temperatures, though still following the Haff’s law. Finally, the case of a square 2D lattice has been carefully analysed, by starting from the derivation of closed set of hydrodynamic-like equations using the Chapman–Enskog method. For <span><math><mi>α</mi></math></span> larger than a critical value, that depends on the system size, the system keeps spatially homogeneous, with zero mean opinions and equal temperatures, as they approach consensus. Below this critical line, the system explores states with spatially non-homogeneous configurations that evolve in time. 
Numerically, it is found that the main role of <span><math><mi>β</mi></math></span> is to change the local structure of the spacial opinion dispersion: while for <span><math><mi>β</mi></math></span> small enough the system keeps locally isotropic, as <span><math><mi>β</mi></math></span> increases, neighbouring agents with similar opinions tend to form local lineal structures.</div></div>","PeriodicalId":20152,"journal":{"name":"Physica A: Statistical Mechanics and its Applications","volume":"684 ","pages":"Article 131270"},"PeriodicalIF":3.1,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Additomultiplicative cascades sustain multifractal reliability across fluctuation intensities
Pub Date: 2026-01-06 | DOI: 10.1016/j.physa.2025.131260
Madhur Mangalam, Eiichi Watanabe, Ken Kiyono
Multifractality is a hallmark of healthy physiological dynamics, reflecting structured variability across timescales and fluctuation intensities. Yet the generative mechanisms that sustain robust multifractal scaling — particularly across both weak and strong fluctuations — remain poorly understood, limiting our ability to build physiologically plausible models. We systematically compared how additive, multiplicative, and additomultiplicative cascade processes preserve multifractal structure under controlled conditions. Crucially, we introduced the coefficient-of-determination profile, r²(q), as a diagnostic measure that reveals the stability of scaling laws across moment orders q ∈ [−10, 10], distinguishing reliable multifractality in weak (q < 0) versus strong (q > 0) fluctuations. Using synthetic time series from each cascade type across seven generational depths and five noise environments (white, fractal, defractalizing, fractalizing, and mixed), we computed multifractal spectra f(α) and mapped r²(q) to assess scaling fidelity. Heatmaps revealed that additive cascades exhibited reliable scaling only for strong fluctuations, while multiplicative cascades maintained structure only near moderate q. In stark contrast, additomultiplicative cascades sustained consistently high r² across the full q-range and all noise conditions, demonstrating robust, symmetric multifractal scaling. This performance proved remarkably insensitive to noise type, confirming that cascade structure — not input statistics — determines scaling reliability. Only hybrid cascades captured both the fine-grained regulatory control and burst-like adaptations characteristic of physiological processes. An illustrative analysis of heart rate variability data provided preliminary evidence that such additomultiplicative structures may constitute a plausible generative mechanism for biological multifractality. Our findings establish the r²(q) profile as a powerful diagnostic for multifra…
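For a concrete sense of the r²(q) diagnostic (computed here with the classical partition-function formalism on a binomial multiplicative cascade, which is not necessarily the estimator or the cascade family used in the paper), the sketch below builds a cascade and reports the coefficient of determination of the log-log scaling fit for each moment order q; the depth, weights, and range of scales are illustrative assumptions.

```python
import numpy as np

def multiplicative_cascade(depth=12, w=(0.7, 0.3), seed=8):
    """Binomial multiplicative cascade: at every generation each cell's mass is split
    between its two children using randomly ordered weights w."""
    rng = np.random.default_rng(seed)
    mass = np.array([1.0])
    for _ in range(depth):
        left = np.where(rng.random(mass.size) < 0.5, w[0], w[1])
        mass = np.column_stack((mass * left, mass * (1 - left))).ravel()
    return mass

def r2_profile(mass, qs, scales):
    """Partition-function analysis: for each q, regress log sum(mu^q) on log(scale)
    and report the coefficient of determination r^2(q) of that linear fit."""
    r2 = []
    for q in qs:
        logZ, logs = [], []
        for s in scales:
            boxes = mass.reshape(-1, s).sum(axis=1)       # coarse-grain into boxes of size s
            logZ.append(np.log(np.sum(boxes ** q)))
            logs.append(np.log(s))
        slope, intercept = np.polyfit(logs, logZ, 1)
        fit = slope * np.array(logs) + intercept
        ss_res = np.sum((np.array(logZ) - fit) ** 2)
        ss_tot = np.sum((np.array(logZ) - np.mean(logZ)) ** 2)
        r2.append(1.0 - ss_res / ss_tot)
    return np.array(r2)

mass = multiplicative_cascade()
qs = np.arange(-10, 11).astype(float)
scales = [2 ** k for k in range(1, 9)]                    # box sizes dividing 2**12
profile = r2_profile(mass, qs, scales)
print(f"r2(q=-10) = {profile[0]:.4f}, r2(q=0) = {profile[10]:.4f}, r2(q=10) = {profile[20]:.4f}")
```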
{"title":"Additomultiplicative cascades sustain multifractal reliability across fluctuation intensities","authors":"Madhur Mangalam , Eiichi Watanabe , Ken Kiyono","doi":"10.1016/j.physa.2025.131260","DOIUrl":"10.1016/j.physa.2025.131260","url":null,"abstract":"<div><div>Multifractality is a hallmark of healthy physiological dynamics, reflecting structured variability across timescales and fluctuation intensities. Yet the generative mechanisms that sustain robust multifractal scaling — particularly across both weak and strong fluctuations — remain poorly understood, limiting our ability to build physiologically plausible models. We systematically compared how additive, multiplicative, and additomultiplicative cascade processes preserve multifractal structure under controlled conditions. Crucially, we introduced the coefficient of determination profile, <span><math><mrow><msup><mrow><mi>r</mi></mrow><mrow><mn>2</mn></mrow></msup><mrow><mo>(</mo><mi>q</mi><mo>)</mo></mrow></mrow></math></span>, as a diagnostic measure that reveals the stability of scaling laws across moment orders <span><math><mrow><mi>q</mi><mspace></mspace><mo>∈</mo><mspace></mspace><mrow><mo>[</mo><mo>−</mo><mn>10</mn><mo>,</mo><mn>10</mn><mo>]</mo></mrow></mrow></math></span>, distinguishing reliable multifractality in weak (<span><math><mrow><mi>q</mi><mspace></mspace><mo><</mo><mspace></mspace><mn>0</mn></mrow></math></span>) versus strong (<span><math><mrow><mi>q</mi><mspace></mspace><mo>></mo><mspace></mspace><mn>0</mn></mrow></math></span>) fluctuations. Using synthetic time series from each cascade type across seven generational depths and five noise environments (white, fractal, defractalizing, fractalizing, and mixed), we computed multifractal spectra <span><math><mrow><mi>f</mi><mrow><mo>(</mo><mi>α</mi><mo>)</mo></mrow></mrow></math></span> and mapped <span><math><mrow><msup><mrow><mi>r</mi></mrow><mrow><mn>2</mn></mrow></msup><mrow><mo>(</mo><mi>q</mi><mo>)</mo></mrow></mrow></math></span> to assess scaling fidelity. Heatmaps revealed that additive cascades exhibited reliable scaling only for strong fluctuations, while multiplicative cascades maintained structure only near moderate <span><math><mi>q</mi></math></span>. In stark contrast, additomultiplicative cascades sustained consistently high <span><math><msup><mrow><mi>r</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span> across the full <span><math><mi>q</mi></math></span>-range and all noise conditions, demonstrating robust, symmetric multifractal scaling. This performance proved remarkably insensitive to noise type, confirming that cascade structure — not input statistics — determines scaling reliability. Only hybrid cascades captured both the fine-grained regulatory control and burst-like adaptations characteristic of physiological processes. An illustrative analysis of heart rate variability data provided preliminary evidence that such additomultiplicative structures may constitute a plausible generative mechanism for biological multifractality. 
Our findings establish the <span><math><mrow><msup><mrow><mi>r</mi></mrow><mrow><mn>2</mn></mrow></msup><mrow><mo>(</mo><mi>q</mi><mo>)</mo></mrow></mrow></math></span> profile as a powerful diagnostic for multifra","PeriodicalId":20152,"journal":{"name":"Physica A: Statistical Mechanics and its Applications","volume":"684 ","pages":"Article 131260"},"PeriodicalIF":3.1,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}