Commonly applied simulation methods for turbulent flows, such as large eddy simulation (LES), wall-modeled LES (WMLES), and detached eddy simulation (DES), face significant challenges: they suffer from improper resolution variations and serious practical problems, including huge computational cost, imbalanced resolution transitions, and resolution mismatch. Alternative simulation methods are described here. Using an extremal entropy analysis, it is shown how minimal-error simulation methods can be designed and how these methods overcome the typical shortcomings of commonly applied approaches. A crucial ingredient of this analysis is the identification of a mathematically implied general hybridization mechanism, which is missing in existing methods. Applications to several complex high-Reynolds-number flow simulations reveal essential performance, functionality, and computational-cost advantages of minimal-error simulation methods.
Stefan Heinz. "Physically Consistent Resolving Simulations of Turbulent Flows." Entropy 26(12), published 2024-11-30. doi:10.3390/e26121044. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11727034/pdf/
Corrosion of soft magnetic materials during service can significantly impact their performance and service life; it is therefore important to improve their corrosion resistance. In this paper, the corrosion resistance, alternating-current soft magnetic properties (AC SMPs), and microstructure of FeCoNiₓAl (x = 1.0-2.0) medium-entropy alloys (MEAs) were studied. Corrosion resistance improves greatly with increasing Ni content: the x = 2.0 alloy has the lowest corrosion current density (I_corr = 2.67 × 10⁻⁷ A/cm²), a 71% reduction compared to the x = 1.0 alloy. Increasing the Ni content also improves the AC SMPs of the alloy: at x = 1.75, the total loss (P_s) is improved by 6% compared to the x = 1.0 alloy. X-ray diffraction (XRD) and scanning electron microscopy (SEM) show that increasing the Ni content promotes formation of the face-centered-cubic (FCC) phase, with the body-centered-cubic (BCC) phase gradually divided by the FCC phase. Electron backscatter diffraction (EBSD) shows that, as the Ni content increases, the number of grain boundaries in the alloy is greatly reduced while numerous phase boundaries appear, and the degree of strain concentration is significantly reduced. The corrosion mechanism of the alloys is also discussed. Our study provides a method to balance soft magnetic properties and corrosion resistance, paving the way for potential applications of Fe-Co-Ni-Al MEAs in corrosive environments.
Wenfeng Peng, Yubing Xia, Hui Xu, Xiaohua Tan. "Tuning Corrosion Resistance and AC Soft Magnetic Properties of Fe-Co-Ni-Al Medium-Entropy Alloy via Ni Content." Entropy 26(12), published 2024-11-30. doi:10.3390/e26121038. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11727253/pdf/
The Einstein equation in a semiclassical approximation is applied to a spherical region of the universe, with the stress-energy tensor consisting of the mass density and pressure of the ΛCDM cosmological model plus an additional contribution due to the quantum vacuum. Expanding the equation in powers of Newton's constant G, the vacuum contributes at second order. The result is that at least part of the acceleration of the expansion of the universe may be due to quantum vacuum fluctuations.
Emilio Santos. "Effects of the Quantum Vacuum at a Cosmic Scale and of Dark Energy." Entropy 26(12), published 2024-11-30. doi:10.3390/e26121042. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11727622/pdf/
Complexity is a key measure of the significance of driving scenarios in scenario-based autonomous driving tests. However, current methods for quantifying scenario complexity focus primarily on static scenes rather than dynamic scenarios and fail to represent how scenarios evolve, even though autonomous vehicle performance may vary significantly across scenarios with different dynamics. This paper proposes the Dynamic Scenario Complexity Quantification (DSCQ) method for autonomous driving, which integrates the effects of the environment, road conditions, and dynamic traffic entities on complexity, and introduces a Dynamic Effect Entropy to measure the uncertainty arising from scenario evolution. Using the real-world DENSE dataset, we demonstrate that the proposed method more accurately quantifies the complexity of real, dynamically evolving scenarios: certain scenes that appear less complex undergo significant dynamic changes over time, which our method captures but conventional approaches overlook. The correlation between scenario complexity and object-detection performance further confirms the method's effectiveness. DSCQ quantifies driving scenario complexity on both spatial and temporal scales, filling a gap left by existing methods that consider only spatial complexity, and shows potential to improve the efficiency of AV safety testing in varied and evolving scenarios.
Tianyue Liu, Cong Wang, Ziqiao Yin, Zhilong Mi, Xiya Xiong, Binghui Guo. "Complexity Quantification of Driving Scenarios with Dynamic Evolution Characteristics." Entropy 26(12), published 2024-11-29. doi:10.3390/e26121033. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11726841/pdf/
After a boom that coincided with the advent of the internet, digital cameras, and digital video and audio storage and playback devices, research on data compression has rested on its laurels for a quarter of a century. Domain-dependent lossy algorithms of that era, such as JPEG, AVC, and MP3, achieved remarkable compression ratios and encoding and decoding speeds with acceptable data quality, which has kept them in common use to this day. However, recent computing paradigms such as cloud computing, edge computing, the Internet of Things (IoT), and digital preservation have gradually posed new challenges, and development trends in data compression are consequently focusing on concepts that were not previously in the spotlight. In this article, we critically evaluate the most prominent of these trends and explore their parallels, complementarities, and differences. Digital data restoration mimics the human ability to avoid memorising information that can be satisfactorily retrieved from context. Feature-based data compression introduces a two-level data representation, with higher-level semantic features and with residuals that correct the feature-restored (predicted) data. Integrating the advantages of individual domain-specific compression methods into a general approach is also challenging. To the best of our knowledge, no method yet addresses all these trends. Our methodology, COMPROMISE, has been developed precisely to make as many solutions to these challenges as possible interoperable. It incorporates features and digital restoration, and it is largely domain-independent (general), asymmetric, and universal, the last referring to the ability to compress data in a common framework in lossy, lossless, and near-lossless modes. COMPROMISE may also be considered an umbrella that links many existing domain-dependent and domain-independent methods, supports hybrid lossless-lossy techniques, and encourages the development of new data compression algorithms.
David Podgorelec, Damjan Strnad, Ivana Kolingerová, Borut Žalik. "State-of-the-Art Trends in Data Compression: COMPROMISE Case Study." Entropy 26(12), published 2024-11-29. doi:10.3390/e26121032. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11726981/pdf/
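The two-level feature/residual idea can be made concrete with a deliberately tiny sketch: a previous-sample predictor stands in for the higher-level "features", and stored residuals correct the predictions, giving a lossless round trip. This is an illustrative toy, not the COMPROMISE method; a near-lossless mode would additionally quantise the residuals within a bounded error.

```python
def encode(data):
    # Prediction level: each sample is predicted by its predecessor.
    # Residual level: store only the corrections to those predictions.
    return [data[0]] + [data[i] - data[i - 1] for i in range(1, len(data))]

def decode(residuals):
    out, acc = [], 0
    for r in residuals:
        acc += r          # re-apply each correction to the running prediction
        out.append(acc)
    return out

signal = [10, 12, 13, 13, 11, 10]
restored = decode(encode(signal))   # lossless round trip: restored == signal
```

For smooth signals the residuals cluster near zero, so an entropy coder applied to them compresses far better than one applied to the raw samples.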
In this study, we explore the impact of stochastic resetting on the dynamics of random walks on a T-fractal network. Employing the generating function technique, we establish a recursive relation for the generating function of the first passage time (FPT) and derive the relationship between the mean first passage time (MFPT) with resetting and the generating function of the FPT without resetting. Our analysis covers various scenarios for a random walker reaching a target site from its starting position; for each case, we determine the optimal resetting probability γ* that minimizes the MFPT. Comparing the results with the MFPT without resetting, we find that resetting significantly enhances search efficiency, particularly as the size of the network increases. Our findings highlight the potential of stochastic resetting as an effective strategy for optimizing search processes in complex networks, offering valuable insights for applications in fields where efficient search strategies are crucial.
Xiaohan Sun, Anlin Li, Shaoxiang Zhu, Feng Zhu. "Random Walk on T-Fractal with Stochastic Resetting." Entropy 26(12), published 2024-11-29. doi:10.3390/e26121034. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11726722/pdf/
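The MFPT under resetting is easy to probe numerically. The sketch below is a Monte Carlo toy, not the paper's generating-function analysis: it uses a simple path graph as a stand-in for the T-fractal, and because the target here sits as far as possible from the reset point, resetting slows the search in this particular geometry. The optimal γ* depends on the network structure, which is exactly what the analytical treatment resolves.

```python
import random

def mfpt_with_resetting(adj, start, target, gamma, trials=500, seed=0):
    """Monte Carlo estimate of the mean first passage time from `start`
    to `target` for a discrete-time random walk that, at each step,
    resets to `start` with probability `gamma` instead of moving."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        node, steps = start, 0
        while node != target:
            if rng.random() < gamma:
                node = start                   # stochastic reset
            else:
                node = rng.choice(adj[node])   # unbiased step to a neighbour
            steps += 1
        total += steps
    return total / trials

# Toy network: a path graph 0-1-...-5 standing in for the T-fractal.
n = 6
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
estimates = {g: mfpt_with_resetting(adj, 0, n - 1, g) for g in (0.0, 0.05, 0.2)}
```

On this path the γ = 0 walk is fastest (the exact MFPT from one end to the other is (n−1)² = 25); on structures where the walker can wander far from the target, resetting instead reduces the MFPT, as the paper shows for the T-fractal.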
Rolling bearings, as critical components of rotating machinery, significantly influence equipment reliability and operational efficiency, so accurate fault diagnosis is crucial for maintaining industrial production safety and continuity. This paper presents a new fault diagnosis method based on FCEEMD multi-complexity low-dimensional features and a directed acyclic graph LSTSVM. Fast Complementary Ensemble Empirical Mode Decomposition (FCEEMD) is applied to decompose vibration signals, effectively reducing background noise. Nonlinear complexity features are then extracted, including sample entropy (SE), permutation entropy (PE), dispersion entropy (DE), the Gini coefficient, the square envelope Gini coefficient (SEGI), and the square envelope spectral Gini coefficient (SESGI), enhancing the capture of signal complexity. In addition, 16 time-domain and 13 frequency-domain features are used to characterize the signal, forming a high-dimensional feature matrix. Robust unsupervised feature selection with local preservation (RULSP) is employed to identify low-dimensional sensitive features. Finally, a multi-classifier based on DAG LSTSVM is constructed using the directed acyclic graph (DAG) strategy, improving fault diagnosis precision. Experiments on both laboratory bearing faults and industrial check-valve faults demonstrate nearly 100% diagnostic accuracy, highlighting the method's effectiveness and potential.
Rongrong Lu, Miao Xu, Chengjiang Zhou, Zhaodong Zhang, Kairong Tan, Yuhuan Sun, Yuran Wang, Min Mao. "A Novel Fault Diagnosis Method Using FCEEMD-Based Multi-Complexity Low-Dimensional Features and Directed Acyclic Graph LSTSVM." Entropy 26(12), published 2024-11-29. doi:10.3390/e26121031. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11727493/pdf/
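Of the complexity features listed, permutation entropy is simple to illustrate. The sketch below implements the standard Bandt-Pompe definition, normalised to [0, 1]; it is a generic illustration rather than the authors' feature-extraction pipeline, and the parameter names (`order`, `delay`) are our own.

```python
import math
import random

def permutation_entropy(series, order=3, delay=1):
    """Normalised permutation entropy of a 1-D series: the Shannon
    entropy of the distribution of ordinal patterns of length `order`."""
    counts = {}
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = tuple(series[i + j * delay] for j in range(order))
        pattern = tuple(sorted(range(order), key=window.__getitem__))  # argsort
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(order))   # normalise to [0, 1]

ramp = list(range(100))                      # one ordinal pattern -> PE = 0
rng = random.Random(1)
noise = [rng.random() for _ in range(1000)]  # many patterns -> PE near 1
```

A healthy bearing tends to produce a more regular vibration signal (lower PE) than one with an impulsive fault, which is why PE works as a discriminative feature.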
Traditional maneuver decision-making approaches depend heavily on accurate and complete situation information, and their decision quality degrades when opponent information is intermittently missing in complex electromagnetic environments. To solve this problem, an autonomous maneuver decision-making approach is developed based on a deep reinforcement learning (DRL) architecture, with a Transformer network integrated into the actor and critic networks to capture the potential dependency relationships in time-series trajectory data. Using these relationships, the information loss is partially compensated, leading to more accurate maneuvering decisions. Introducing the Transformer network into DRL raises the issues of limited experience samples, low sampling efficiency, and poor training stability; to address them, an effective decision-making reward, a prioritized sampling method, and a dynamic learning-rate adjustment mechanism are proposed. Extensive simulation results show that the proposed approach outperforms traditional DRL algorithms, achieving a higher win rate when opponent information is lost.
Wentao Li, Feng Fang, Dongliang Peng, Shuning Han. "An Intelligent Maneuver Decision-Making Approach for Air Combat Based on Deep Reinforcement Learning and Transformer Networks." Entropy 26(12), published 2024-11-29. doi:10.3390/e26121036. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11727636/pdf/
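Among the training fixes, the prioritized sampling method is the easiest to sketch. The toy below uses simple proportional sampling (in the spirit of prioritized experience replay, not the authors' exact design); the class and priority values are hypothetical.

```python
import random

class PrioritizedBuffer:
    """Minimal proportional prioritized replay buffer: transitions with
    larger priority (e.g. TD error) are sampled more often."""

    def __init__(self, capacity=1000, seed=0):
        self.capacity = capacity
        self.items, self.priorities = [], []
        self.rng = random.Random(seed)

    def add(self, transition, priority):
        if len(self.items) >= self.capacity:       # evict the oldest entry
            self.items.pop(0)
            self.priorities.pop(0)
        self.items.append(transition)
        self.priorities.append(priority)

    def sample(self, k):
        # Sample with probability proportional to priority (with replacement).
        return self.rng.choices(self.items, weights=self.priorities, k=k)

buf = PrioritizedBuffer()
buf.add("rare-but-informative", 10.0)
for i in range(9):
    buf.add(f"routine-{i}", 1.0)
batch = buf.sample(1000)   # the high-priority transition dominates the batch
```

With priorities 10.0 versus nine entries at 1.0, the rare transition is expected in roughly 10/19 of the samples, which is the mechanism that speeds up learning from scarce, informative experiences.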
We demonstrate that at the rim of the photon sphere of a black hole, a quantum statistics transition takes place in any multi-particle system of indistinguishable particles passing through this rim to the inside. The related local departure from the Pauli exclusion principle causes a decay of the internal structure of collective fermionic systems, including the collapse of Fermi spheres in compressed matter. The Fermi sphere decay is associated with the emission of electromagnetic radiation, which carries away the energy and entropy of the falling matter without violating unitarity. The spectrum and timing of this radiation agree with some observed short giant gamma-ray bursts and with X-ray components of the luminosity of quasars and of short transients powered by black holes. The release of energy and entropy on passing the photon-sphere rim significantly modifies the premises of the information paradox concerning matter falling into a black hole.
Janusz Edward Jacak. "Modification of Premises for the Black Hole Information Paradox Caused by Topological Constraints in the Event Horizon Vicinity." Entropy 26(12), published 2024-11-29. doi:10.3390/e26121035. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11727333/pdf/
Causal inference has become popular in physical and engineering applications. While the problem poses immense challenges, it provides a way to model complex networks from observed time series. In this paper, we present the optimal conditional correlation dimensional geometric information flow principle (oGeoC), which can reveal direct and indirect causal relations in a network through geometric interpretations. We introduce two algorithms that use the oGeoC principle to discover direct links and then remove indirect ones. The algorithms are evaluated on coupled logistic networks. The results indicate that, when the number of observations is sufficient, the proposed algorithms are highly accurate in identifying direct causal links and have a low false-positive rate.
Özge Canlı Usta, Erik M. Bollt. "Fractal Conditional Correlation Dimension Infers Complex Causal Networks." Entropy 26(12), published 2024-11-28. doi:10.3390/e26121030. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11727536/pdf/
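Coupled logistic networks, the benchmark used above, are straightforward to simulate. The sketch below assumes a simple convex coupling of each node toward its parents' logistic updates; the exact coupling form and parameters (`r`, `eps`) used in the paper may differ.

```python
import random

def coupled_logistic(adj, n_nodes, steps, r=4.0, eps=0.05, seed=0):
    """Simulate a network of logistic maps x -> r*x*(1-x), where each node
    is nudged toward the mean update of its parents with coupling eps.
    `adj[j]` lists the parents (drivers) of node j."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n_nodes)]
    traj = []
    for _ in range(steps):
        new = []
        for j in range(n_nodes):
            own = r * x[j] * (1 - x[j])
            if adj[j]:
                drive = sum(r * x[i] * (1 - x[i]) for i in adj[j]) / len(adj[j])
                new.append((1 - eps) * own + eps * drive)  # convex coupling
            else:
                new.append(own)
        x = new
        traj.append(list(x))
    return traj

# A 3-node chain 0 -> 1 -> 2: node 1 is driven by 0, node 2 by 1.
# A causal-discovery method should recover the direct links 0->1 and 1->2
# while rejecting the indirect relation 0->2.
traj = coupled_logistic({0: [], 1: [0], 2: [1]}, n_nodes=3, steps=500)
```

The chain topology is the canonical test of indirect-link removal: node 0 influences node 2 only through node 1, so a method that reports 0→2 as direct has a false positive.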