An Assessment of Commonly Used Equivalent Circuit Models for Corrosion Analysis: A Bayesian Approach to Electrochemical Impedance Spectroscopy
Runze Zhang, Debashish Sur, Kangming Li, Julia Witt, Robert Black, Alexander Whittingham, John R. Scully, Jason Hattrick-Simpers
arXiv:2407.20297 (2024-07-29)
Electrochemical Impedance Spectroscopy (EIS) is a crucial technique for assessing corrosion of metallic materials. The analysis of EIS hinges on the selection of an appropriate equivalent circuit model (ECM) that accurately characterizes the system under study. In this work, we systematically examined the applicability of three commonly used ECMs across several typical material degradation scenarios. By applying Bayesian inference to simulated corrosion EIS data, we assessed the suitability of these ECMs under different corrosion conditions and identified regions where the EIS data lack sufficient information to statistically substantiate the ECM structure. Additionally, we posit that the traditional approach to EIS analysis, which often requires measurements down to very low frequencies, may not always be necessary to correctly identify the appropriate ECM. Our study assesses the impact of omitting data from the low- to medium-frequency ranges on inference results and reveals that a significant portion of low-frequency measurements can be excluded without substantially compromising the accuracy of the extracted system parameters. Further, we propose simple checks on the posterior distributions of the ECM components and on the posterior predictions, which can be used to quantitatively evaluate the suitability of a particular ECM and the minimum frequency that must be measured. This framework points to a pathway for expediting EIS acquisition by intelligently reducing low-frequency data collection and permitting on-the-fly EIS measurements.
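As a toy illustration of the Bayesian workflow described above, the sketch below fits a simplified Randles circuit to a synthetic impedance spectrum with a random-walk Metropolis sampler. The circuit, priors, noise model, and sampler settings are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: Bayesian posterior sampling for a simplified Randles
# equivalent-circuit model, Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl).
# Illustrative only -- the paper's circuits, priors, and sampler may differ.
import numpy as np

rng = np.random.default_rng(0)

def z_model(freq, Rs, Rct, Cdl):
    w = 2 * np.pi * freq
    return Rs + Rct / (1 + 1j * w * Rct * Cdl)

# Synthetic "measured" spectrum: 1 Hz .. 100 kHz, 2% complex noise.
freq = np.logspace(0, 5, 40)
true = (20.0, 500.0, 1e-5)                      # Rs [ohm], Rct [ohm], Cdl [F]
z_obs = z_model(freq, *true)
z_obs += 0.02 * np.abs(z_obs) * (rng.standard_normal(40) + 1j * rng.standard_normal(40))

def log_post(theta):
    Rs, Rct, Cdl = theta
    if Rs <= 0 or Rct <= 0 or Cdl <= 0:         # positivity prior
        return -np.inf
    resid = z_obs - z_model(freq, Rs, Rct, Cdl)
    sigma = 0.02 * np.abs(z_obs)                # assumed known noise level
    return -0.5 * np.sum((resid.real / sigma) ** 2 + (resid.imag / sigma) ** 2)

# Random-walk Metropolis over log-parameters.
theta = np.log(np.array(true) * 1.5)            # deliberately offset start
samples, lp = [], log_post(np.exp(theta))
for _ in range(20000):
    prop = theta + 0.02 * rng.standard_normal(3)
    lp_prop = log_post(np.exp(prop))
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(np.exp(theta))

post = np.array(samples[5000:])                 # drop burn-in
print("posterior medians:", np.median(post, axis=0))
print("posterior 16-84%: ", np.percentile(post, [16, 84], axis=0))
```

Narrow, unimodal posteriors that bracket the true values are the kind of posterior check the abstract refers to; flat or multimodal posteriors would flag an under-constrained ECM or an insufficient frequency range.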
{"title":"An Assessment of Commonly Used Equivalent Circuit Models for Corrosion Analysis: A Bayesian Approach to Electrochemical Impedance Spectroscopy","authors":"Runze Zhang, Debashish Sur, Kangming Li, Julia Witt, Robert Black, Alexander Whittingham, John R. Scully, Jason Hattrick-Simpers","doi":"arxiv-2407.20297","DOIUrl":"https://doi.org/arxiv-2407.20297","url":null,"abstract":"Electrochemical Impedance Spectroscopy (EIS) is a crucial technique for\u0000assessing corrosion of a metallic materials. The analysis of EIS hinges on the\u0000selection of an appropriate equivalent circuit model (ECM) that accurately\u0000characterizes the system under study. In this work, we systematically examined\u0000the applicability of three commonly used ECMs across several typical material\u0000degradation scenarios. By applying Bayesian Inference to simulated corrosion\u0000EIS data, we assessed the suitability of these ECMs under different corrosion\u0000conditions and identified regions where the EIS data lacks sufficient\u0000information to statistically substantiate the ECM structure. Additionally, we\u0000posit that the traditional approach to EIS analysis, which often requires\u0000measurements to very low frequencies, might not be always necessary to\u0000correctly model the appropriate ECM. Our study assesses the impact of omitting\u0000data from low to medium-frequency ranges on inference results and reveals that\u0000a significant portion of low-frequency measurements can be excluded without\u0000substantially compromising the accuracy of extracting system parameters.\u0000Further, we propose simple checks to the posterior distributions of the ECM\u0000components and posterior predictions, which can be used to quantitatively\u0000evaluate the suitability of a particular ECM and the minimum frequency required\u0000to be measured. This framework points to a pathway for expediting EIS\u0000acquisition by intelligently reducing low-frequency data collection and\u0000permitting on-the-fly EIS measurements","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141865167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Superuniversal Statistics of Complex Time-Delays in Non-Hermitian Scattering Systems
Nadav Shaibe, Jared M. Erb, Steven M. Anlage
arXiv:2408.05343 (2024-07-26)
The Wigner-Smith time-delay of flux-conserving systems is a real quantity that measures how long an excitation resides in an interaction region. The complex generalization of time-delay to non-Hermitian systems is still under development; in particular, its statistical properties in the short-wavelength limit of complex chaotic scattering systems have not been investigated. From the experimentally measured multi-port scattering ($S$) matrices of one-dimensional graphs, a two-dimensional billiard, and a three-dimensional cavity, we calculate the complex Wigner-Smith time-delay ($\tau_{WS}$), as well as each individual reflection ($\tau_{xx}$) and transmission ($\tau_{xy}$) time-delay. The complex reflection time-delay differences ($\tau_{\delta R}$) between each port are calculated, and the transmission time-delay differences ($\tau_{\delta T}$) are introduced for systems exhibiting non-reciprocal scattering. Large time-delays are associated with coherent perfect absorption, reflectionless scattering, slow light, and uni-directional invisibility. We demonstrate that the large-delay tails of the distributions of the real and imaginary parts of each of these time-delay quantities are superuniversal, independent of experimental parameters: uniform attenuation $\eta$, number of scattering channels $M$, wave-propagation dimension $\mathcal{D}$, and Dyson symmetry class $\beta$. This superuniversality is in direct contrast with the well-established time-delay statistics of unitary scattering systems, where the tail of the $\tau_{WS}$ distribution depends explicitly on the values of $M$ and $\beta$. Due to the direct analogy of the wave equations, the time-delay statistics described in this paper are applicable to any non-Hermitian wave-chaotic scattering system in the short-wavelength limit, such as quantum graphs, electromagnetic, optical and acoustic resonators, etc.
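For readers who want to experiment with the central quantity, here is a minimal numerical sketch of the complex Wigner-Smith time-delay, $\tau_{WS} = -\frac{i}{M}\,\partial_\omega \ln\det S(\omega)$, evaluated by finite differences on a frequency grid. The one-pole lossy $S$-matrix below is an assumed toy model, not the experimental graph, billiard, or cavity data.

```python
# Minimal sketch: complex Wigner-Smith time-delay from a frequency-sampled
# S-matrix, tau_WS(w) = -(i/M) d ln det S / dw, via finite differences.
import numpy as np

M = 2                                   # number of scattering channels
w = np.linspace(9.0, 11.0, 2001)        # probe frequencies (arb. units)
w0, gamma, eta = 10.0, 0.05, 0.02       # resonance, coupling, uniform loss

def s_matrix(wi):
    # One-pole scattering model with uniform attenuation eta (sub-unitary).
    denom = wi - w0 + 1j * (M * gamma / 2 + eta)
    return np.eye(M, dtype=complex) - 1j * gamma / denom * np.ones((M, M))

logdet = np.array([np.log(np.linalg.det(s_matrix(wi))) for wi in w])
logdet = logdet.real + 1j * np.unwrap(logdet.imag)   # continuous phase branch
tau_ws = -1j / M * np.gradient(logdet, w)

print("max |Re tau_WS|:", np.abs(tau_ws.real).max())
print("max |Im tau_WS|:", np.abs(tau_ws.imag).max())
```

With $\eta > 0$ the determinant is sub-unitary and $\tau_{WS}$ acquires an imaginary part, which vanishes as the loss is switched off.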
{"title":"Superuniversal Statistics of Complex Time-Delays in Non-Hermitian Scattering Systems","authors":"Nadav Shaibe, Jared M. Erb, Steven M. Anlage","doi":"arxiv-2408.05343","DOIUrl":"https://doi.org/arxiv-2408.05343","url":null,"abstract":"The Wigner-Smith time-delay of flux conserving systems is a real quantity\u0000that measures how long an excitation resides in an interaction region. The\u0000complex generalization of time-delay to non-Hermitian systems is still under\u0000development, in particular, its statistical properties in the short-wavelength\u0000limit of complex chaotic scattering systems has not been investigated. From the\u0000experimentally measured multi-port scattering ($S$)-matrices of one-dimensional\u0000graphs, a two-dimensional billiard, and a three-dimensional cavity, we\u0000calculate the complex Wigner-Smith ($tau_{WS}$), as well as each individual\u0000reflection ($tau_{xx}$) and transmission ($tau_{xy}$) time-delays. The\u0000complex reflection time-delay differences ($tau_{delta R}$) between each port\u0000are calculated, and the transmission time-delay differences ($tau_{delta T}$)\u0000are introduced for systems exhibiting non-reciprocal scattering. Large\u0000time-delays are associated with coherent perfect absorption, reflectionless\u0000scattering, slow light, and uni-directional invisibility. We demonstrate that\u0000the large-delay tails of the distributions of the real and imaginary parts of\u0000each of these time-delay quantities are superuniversal, independent of\u0000experimental parameters: uniform attenuation $eta$, number of scattering\u0000channels $M$, wave propagation dimension $mathcal{D}$, and Dyson symmetry\u0000class $beta$. This superuniversality is in direct contrast with the\u0000well-established time-delay statistics of unitary scattering systems, where the\u0000tail of the $tau_{WS}$ distribution depends explicitly on the values of $M$\u0000and $beta$. Due to the direct analogy of the wave equations, the time-delay\u0000statistics described in this paper are applicable to any non-Hermitian\u0000wave-chaotic scattering system in the short-wavelength limit, such as quantum\u0000graphs, electromagnetic, optical and acoustic resonators, etc.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards unveiling the large-scale nature of gravity with the wavelet scattering transform
Georgios Valogiannis, Francisco Villaescusa-Navarro, Marco Baldi
arXiv:2407.18647 (2024-07-26)
We present the first application of the Wavelet Scattering Transform (WST) to constrain the nature of gravity using the three-dimensional (3D) large-scale structure of the universe. Utilizing the Quijote-MG N-body simulations, we can reliably model the 3D matter overdensity field for the f(R) Hu-Sawicki modified gravity (MG) model down to $k_{\rm max}=0.5$ h/Mpc. Combining these simulations with the Quijote $\nu$CDM collection, we then conduct a Fisher forecast of the marginalized constraints obtained on gravity using the WST coefficients and the matter power spectrum at redshift z=0. Our results demonstrate that the WST substantially improves upon the 1$\sigma$ error obtained on the parameter that captures deviations from standard General Relativity (GR), yielding a tenfold improvement compared to the corresponding matter power spectrum result. At the same time, the WST also enhances the precision on the $\Lambda$CDM parameters and the sum of neutrino masses by factors of 1.2-3.4 relative to the matter power spectrum. Despite the overall reduction in the WST performance when we focus on larger scales, it still provides a $4.5\times$ tighter 1$\sigma$ error for the MG parameter at $k_{\rm max}=0.2$ h/Mpc, highlighting its great sensitivity to the underlying gravity theory. This first proof-of-concept study reaffirms the constraining properties of the WST technique and paves the way for exciting future applications in performing precise large-scale tests of gravity with the new generation of cutting-edge cosmological data.
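The Fisher-forecast machinery itself is compact. The sketch below shows the standard pipeline: finite-difference derivatives of a summary statistic from paired simulations, an inverse sample covariance with a Hartlap correction, and marginalized errors from the inverse Fisher matrix. All arrays are random placeholders standing in for the Quijote-MG WST coefficients.

```python
# Minimal sketch of a Fisher forecast from simulated summary statistics:
# F_ij = dS/dtheta_i^T C^{-1} dS/dtheta_j, with derivatives from paired
# simulations and the covariance from fiducial realizations. Shapes and
# data here are placeholders, not the actual WST measurements.
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_params, n_fid = 50, 3, 500     # e.g. WST coefficients, 3 parameters

# Placeholder ensembles: fiducial realizations and +/- step simulations.
fid = rng.standard_normal((n_fid, n_bins))
step = np.array([0.01, 0.01, 0.005])                 # parameter step sizes
plus = fid.mean(0) + rng.standard_normal((n_params, n_bins)) * 0.01
minus = fid.mean(0) - rng.standard_normal((n_params, n_bins)) * 0.01

cov = np.cov(fid, rowvar=False)
# Hartlap factor corrects the bias of the inverse sample covariance.
hartlap = (n_fid - n_bins - 2) / (n_fid - 1)
icov = hartlap * np.linalg.inv(cov)

deriv = (plus - minus) / (2 * step[:, None])          # (n_params, n_bins)
fisher = deriv @ icov @ deriv.T
marg_err = np.sqrt(np.diag(np.linalg.inv(fisher)))    # marginalized 1-sigma
print("marginalized errors:", marg_err)
```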
{"title":"Towards unveiling the large-scale nature of gravity with the wavelet scattering transform","authors":"Georgios Valogiannis, Francisco Villaescusa-Navarro, Marco Baldi","doi":"arxiv-2407.18647","DOIUrl":"https://doi.org/arxiv-2407.18647","url":null,"abstract":"We present the first application of the Wavelet Scattering Transform (WST) in\u0000order to constrain the nature of gravity using the three-dimensional (3D)\u0000large-scale structure of the universe. Utilizing the Quijote-MG N-body\u0000simulations, we can reliably model the 3D matter overdensity field for the f(R)\u0000Hu-Sawicki modified gravity (MG) model down to $k_{rm max}=0.5$ h/Mpc.\u0000Combining these simulations with the Quijote $nu$CDM collection, we then\u0000conduct a Fisher forecast of the marginalized constraints obtained on gravity\u0000using the WST coefficients and the matter power spectrum at redshift z=0. Our\u0000results demonstrate that the WST substantially improves upon the 1$sigma$\u0000error obtained on the parameter that captures deviations from standard General\u0000Relativity (GR), yielding a tenfold improvement compared to the corresponding\u0000matter power spectrum result. At the same time, the WST also enhances the\u0000precision on the $Lambda$CDM parameters and the sum of neutrino masses, by\u0000factors of 1.2-3.4 compared to the matter power spectrum, respectively. Despite\u0000the overall reduction in the WST performance when we focus on larger scales, it\u0000still provides a relatively $4.5times$ tighter 1$sigma$ error for the MG\u0000parameter at $k_{rm max}=0.2$ h/Mpc, highlighting its great sensitivity to the\u0000underlying gravity theory. This first proof-of-concept study reaffirms the\u0000constraining properties of the WST technique and paves the way for exciting\u0000future applications in order to perform precise large-scale tests of gravity\u0000with the new generation of cutting-edge cosmological data.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"49 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141865093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physics-guided machine learning predicts the planet-scale performance of solar farms with sparse, heterogeneous, public data
Jabir Bin Jahangir, Muhammad Ashraful Alam
arXiv:2407.18284 (2024-07-25)
The photovoltaics (PV) technology landscape is evolving rapidly. To predict the potential and scalability of emerging PV technologies, a global understanding of these systems' performance is essential. Traditionally, experimental and computational studies at large national research facilities have focused on PV performance in specific regional climates. However, synthesizing these regional studies to understand the worldwide performance potential has proven difficult. Given the expense of obtaining experimental data, the challenge of coordinating experiments at national labs across a politically divided world, and the data-privacy concerns of large commercial operators, a fundamentally different, data-efficient approach is desired. Here, we present a physics-guided machine learning (PGML) scheme to demonstrate that: (a) the world can be divided into a few PV-specific climate zones, called PVZones, illustrating that the relevant meteorological conditions are shared across continents; (b) by exploiting the climatic similarities, high-quality monthly energy yield data from as few as five locations can accurately predict yearly energy yield potential with high spatial resolution and a root mean square error of less than 8 kWhm$^{-2}$; and (c) even with noisy, heterogeneous public PV performance data, the global energy yield can be predicted with less than 6% relative error compared to physics-based simulations, provided that the dataset is representative. This PGML scheme is agnostic to PV technology and farm topology, making it adaptable to new PV technologies or farm configurations. The results encourage physics-guided, data-driven collaboration among national policymakers and research organizations to build efficient decision support systems for accelerated PV qualification and deployment across the world.
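A minimal sketch of the PVZones step, assuming k-means over a few annual-mean meteorological features and five zones; both the features and the zone count are assumptions for illustration, and the paper's physics-guided pipeline is more elaborate.

```python
# Minimal sketch of the PVZones idea: cluster locations by annual-mean
# meteorological features, then pick one representative site per zone
# from which to extrapolate energy yield.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Placeholder: (n_sites, 3) = [GHI kWh/m^2/day, air temp C, humidity %]
features = np.column_stack([
    rng.uniform(2.5, 7.5, 1000),
    rng.uniform(-5, 35, 1000),
    rng.uniform(20, 90, 1000),
])

# Standardize so no single feature dominates the distance metric.
X = (features - features.mean(0)) / features.std(0)

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
zones = km.labels_

# Representative site per zone: the one closest to its cluster centroid.
for z in range(5):
    members = np.flatnonzero(zones == z)
    d = np.linalg.norm(X[members] - km.cluster_centers_[z], axis=1)
    print(f"PVZone {z}: {members.size} sites, representative index {members[d.argmin()]}")
```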
{"title":"Physics-guided machine learning predicts the planet-scale performance of solar farms with sparse, heterogeneous, public data","authors":"Jabir Bin Jahangir, Muhammad Ashraful Alam","doi":"arxiv-2407.18284","DOIUrl":"https://doi.org/arxiv-2407.18284","url":null,"abstract":"The photovoltaics (PV) technology landscape is evolving rapidly. To predict\u0000the potential and scalability of emerging PV technologies, a global\u0000understanding of these systems' performance is essential. Traditionally,\u0000experimental and computational studies at large national research facilities\u0000have focused on PV performance in specific regional climates. However,\u0000synthesizing these regional studies to understand the worldwide performance\u0000potential has proven difficult. Given the expense of obtaining experimental\u0000data, the challenge of coordinating experiments at national labs across a\u0000politically-divided world, and the data-privacy concerns of large commercial\u0000operators, however, a fundamentally different, data-efficient approach is\u0000desired. Here, we present a physics-guided machine learning (PGML) scheme to\u0000demonstrate that: (a) The world can be divided into a few PV-specific climate\u0000zones, called PVZones, illustrating that the relevant meteorological conditions\u0000are shared across continents; (b) by exploiting the climatic similarities,\u0000high-quality monthly energy yield data from as few as five locations can\u0000accurately predict yearly energy yield potential with high spatial resolution\u0000and a root mean square error of less than 8 kWhm$^{2}$, and (c) even with\u0000noisy, heterogeneous public PV performance data, the global energy yield can be\u0000predicted with less than 6% relative error compared to physics-based\u0000simulations provided that the dataset is representative. This PGML scheme is\u0000agnostic to PV technology and farm topology, making it adaptable to new PV\u0000technologies or farm configurations. The results encourage physics-guided,\u0000data-driven collaboration among national policymakers and research\u0000organizations to build efficient decision support systems for accelerated PV\u0000qualification and deployment across the world.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141865170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unraveling Complexity: Singular Value Decomposition in Complex Experimental Data Analysis
Judith F. Stein, Aviad Frydman, Richard Berkovits
arXiv:2407.16267 (2024-07-23)
Analyzing complex experimental data that depend on multiple parameters is challenging. We propose Singular Value Decomposition (SVD) as an effective solution. The method, demonstrated through real experimental data analysis, surpasses conventional approaches in understanding complex physics data. Singular values and vectors distinguish and highlight the various physical mechanisms and scales, revealing features that were previously difficult to discern. SVD emerges as a powerful tool for navigating complex experimental landscapes, showing promise for diverse experimental measurements.
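The core recipe is short: arrange the measurements into a matrix whose rows scan one parameter and whose columns scan another (e.g., time or energy), then inspect the singular values and vectors. A self-contained sketch on assumed synthetic two-mechanism data:

```python
# Minimal sketch: organize measurements as a matrix X (rows = parameter
# settings, columns = samples) and use SVD to separate mechanisms by scale.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 400)
params = np.linspace(0, 1, 60)

# Two overlapping physical contributions plus noise.
slow = np.outer(params, np.sin(2 * np.pi * t))          # mechanism 1
fast = np.outer(params**2, np.sin(40 * np.pi * t))      # mechanism 2
X = slow + 0.3 * fast + 0.05 * rng.standard_normal((60, 400))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
print("leading singular values:", np.round(s[:5], 2))
# Rows of Vt are the modes (the two sinusoids emerge in Vt[0], Vt[1]);
# columns of U give each mode's dependence on the scanned parameter.
```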
{"title":"Unraveling Complexity: Singular Value Decomposition in Complex Experimental Data Analysis","authors":"Judith F. Stein, Aviad Frydman, Richard Berkovits","doi":"arxiv-2407.16267","DOIUrl":"https://doi.org/arxiv-2407.16267","url":null,"abstract":"Analyzing complex experimental data with multiple parameters is challenging.\u0000We propose using Singular Value Decomposition (SVD) as an effective solution.\u0000This method, demonstrated through real experimental data analysis, surpasses\u0000conventional approaches in understanding complex physics data. Singular values\u0000and vectors distinguish and highlight various physical mechanisms and scales,\u0000revealing previously challenging elements. SVD emerges as a powerful tool for\u0000navigating complex experimental landscapes, showing promise for diverse\u0000experimental measurements.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"41 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141784370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Setting of the Poincaré section for accurately calculating the phase of rhythmic spatiotemporal dynamics
Takahiro Arai, Yoji Kawamura, Toshio Aoyagi
arXiv:2407.16080 (2024-07-22)
The synchronization analysis of limit-cycle oscillators is prevalent in many fields, including physics, chemistry, and life sciences. It relies on the phase calculation that utilizes measurements. However, the synchronization of spatiotemporal dynamics cannot be analyzed because a standardized method for calculating the phase has not been established. The presence of spatial structure complicates the determination of which measurements should be used for accurate phase calculation. To address this, we explore a method for calculating the phase from the time series of measurements taken at a single spatial grid point. The phase is calculated to increase linearly between event times when the measurement time series intersects the Poincaré section. The difference between the calculated phase and the isochron-based phase, resulting from the discrepancy between the isochron and the Poincaré section, is evaluated using a linear approximation near the limit-cycle solution. We found that the difference is small when measurements are taken from regions that dominate the rhythms of the entire spatiotemporal dynamics. Furthermore, we investigate an alternative method where the Poincaré section is applied to the time series obtained through orthogonal decomposition of the entire spatiotemporal dynamics. We present two decomposition schemes that utilize the principal component analysis. For illustration, the phase is calculated from the measurements of spatiotemporal dynamics exhibiting target waves or oscillating spots, simulated by weakly coupled FitzHugh-Nagumo reaction-diffusion models.
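A minimal sketch of the single-grid-point phase calculation described above: detect the instants where the measured series crosses a Poincaré section (here an assumed upward threshold crossing) and let the phase grow linearly by $2\pi$ between consecutive events.

```python
# Minimal sketch of the phase definition: detect Poincare-section events in
# one measurement time series, then interpolate the phase linearly between
# them. Threshold choice and demo signal are illustrative assumptions.
import numpy as np

def section_phase(x, t, threshold=0.0):
    """Phase from one measurement time series via Poincare-section events."""
    up = np.flatnonzero((x[:-1] < threshold) & (x[1:] >= threshold))
    # Linear interpolation of the exact crossing instants.
    t_ev = t[up] + (threshold - x[up]) / (x[up + 1] - x[up]) * (t[up + 1] - t[up])
    phase = np.full_like(t, np.nan)
    for k in range(len(t_ev) - 1):
        m = (t >= t_ev[k]) & (t < t_ev[k + 1])
        phase[m] = 2 * np.pi * (k + (t[m] - t_ev[k]) / (t_ev[k + 1] - t_ev[k]))
    return phase  # NaN before the first / after the last event

# Demo on a noisy limit-cycle-like signal.
t = np.linspace(0, 50, 5001)
x = np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.default_rng(4).standard_normal(t.size)
phi = section_phase(x, t)
print("mean frequency [rad/s]:", np.nanmean(np.gradient(phi, t)[np.isfinite(phi)]))
```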
{"title":"Setting of the Poincaré section for accurately calculating the phase of rhythmic spatiotemporal dynamics","authors":"Takahiro Arai, Yoji Kawamura, Toshio Aoyagi","doi":"arxiv-2407.16080","DOIUrl":"https://doi.org/arxiv-2407.16080","url":null,"abstract":"The synchronization analysis of limit-cycle oscillators is prevalent in many\u0000fields, including physics, chemistry, and life sciences. It relies on the phase\u0000calculation that utilizes measurements. However, the synchronization of\u0000spatiotemporal dynamics cannot be analyzed because a standardized method for\u0000calculating the phase has not been established. The presence of spatial\u0000structure complicates the determination of which measurements should be used\u0000for accurate phase calculation. To address this, we explore a method for\u0000calculating the phase from the time series of measurements taken at a single\u0000spatial grid point. The phase is calculated to increase linearly between event\u0000times when the measurement time series intersects the Poincar'e section. The\u0000difference between the calculated phase and the isochron-based phase, resulting\u0000from the discrepancy between the isochron and the Poincar'e section, is\u0000evaluated using a linear approximation near the limit-cycle solution. We found\u0000that the difference is small when measurements are taken from regions that\u0000dominate the rhythms of the entire spatiotemporal dynamics. Furthermore, we\u0000investigate an alternative method where the Poincar'e section is applied to\u0000the time series obtained through orthogonal decomposition of the entire\u0000spatiotemporal dynamics. We present two decomposition schemes that utilize the\u0000principal component analysis. For illustration, the phase is calculated from\u0000the measurements of spatiotemporal dynamics exhibiting target waves or\u0000oscillating spots, simulated by weakly coupled FitzHugh-Nagumo\u0000reaction-diffusion models.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141784371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the Event-Shape Methods in Search for the Chiral Magnetic Effect in Relativistic Heavy Ion Collisions
Han-Sheng Li, Yicheng Feng, Fuqiang Wang
arXiv:2407.14489 (2024-07-19)
The Chiral Magnetic Effect (CME) is a phenomenon in which electric charge is separated by a strong magnetic field from local domains of chirality imbalance and parity violation in quantum chromodynamics (QCD). The CME-sensitive observable, the charge-dependent three-point azimuthal correlator $\Delta\gamma$, is contaminated by a major physics background proportional to the particle's elliptic anisotropy $v_2$. Event-shape engineering (ESE), which bins events by the dynamical fluctuations of $v_2$, and event-shape selection (ESS), which bins events by the statistical fluctuations of $v_2$, are two methods to search for the CME by projecting $\Delta\gamma$ to the $v_2=0$ intercept. We conduct a systematic study of these two methods using physics models as well as toy model simulations. It is observed that the ESE method requires significantly more statistics than the ESS method to achieve the same statistical precision of the intercept. It is found that the intercept from the ESS method depends on the details of the event content, such as the mixtures of background contributing sources, and thus is not a clean measure of the CME.
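To make the projection concrete, the sketch below injects a $v_2$-independent signal plus a background proportional to $v_2$ into toy events, bins them by $v_2$, and extrapolates the binned $\Delta\gamma$ to $v_2=0$. All numbers and the simple quantile binning are illustrative assumptions rather than the ESE/ESS procedures of the paper.

```python
# Minimal sketch of the v2 = 0 intercept idea: fit Delta-gamma versus v2
# with a straight line and read off the intercept as the background-free
# estimate. Toy numbers only -- real analyses use event-plane/cumulant
# methods and careful event-shape bin definitions.
import numpy as np

rng = np.random.default_rng(5)
n_events = 200000
cme_signal = 2e-5                               # injected v2-independent part
bkg_slope = 1e-3                                # background proportional to v2

v2 = np.clip(rng.normal(0.06, 0.02, n_events), 0, None)   # per-event v2
dgamma = cme_signal + bkg_slope * v2 + rng.normal(0, 5e-4, n_events)

# Bin events by v2 and average Delta-gamma in each bin.
edges = np.quantile(v2, np.linspace(0, 1, 11))
idx = np.clip(np.digitize(v2, edges) - 1, 0, 9)
v2_bin = np.array([v2[idx == b].mean() for b in range(10)])
dg_bin = np.array([dgamma[idx == b].mean() for b in range(10)])
dg_err = np.array([dgamma[idx == b].std() / np.sqrt((idx == b).sum()) for b in range(10)])

# Weighted linear fit; the intercept estimates the CME-like component.
coef, cov = np.polyfit(v2_bin, dg_bin, 1, w=1 / dg_err, cov=True)
print(f"intercept = {coef[1]:.2e} +/- {np.sqrt(cov[1, 1]):.2e} (true {cme_signal:.0e})")
```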
{"title":"Investigating the Event-Shape Methods in Search for the Chiral Magnetic Effect in Relativistic Heavy Ion Collisions","authors":"Han-Sheng Li, Yicheng Feng, Fuqiang Wang","doi":"arxiv-2407.14489","DOIUrl":"https://doi.org/arxiv-2407.14489","url":null,"abstract":"Chiral Magnetic Effect (CME) is a phenomenon in which electric charge is\u0000separated by a strong magnetic field from local domains of chirality imbalance\u0000and parity violation in quantum chromodynamics (QCD). The CME-sensitive\u0000observable, charge-dependent three-point azimuthal correlator $Deltagamma$,\u0000is contaminated by a major physics background proportional to the particle's\u0000elliptic anisotropy $v_2$. Event-shape engineering (ESE) binning events in\u0000dynamical fluctuations of $v_2$ and event-shape selection (ESS) binning events\u0000in statistical fluctuations of $v_2$ are two methods to search for the CME by\u0000projecting $Deltagamma$ to the $v_2=0$ intercept. We conduct a systematic\u0000study of these two methods using physics models as well as toy model\u0000simulations. It is observed that the ESE method requires significantly more\u0000statistics than the ESS method to achieve the same statistical precision of the\u0000intercept. It is found that the intercept from the ESS method depends on the\u0000details of the event content, such as the mixtures of background contributing\u0000sources, and thus is not a clean measure of the CME.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141739012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Theoretical Analysis on Block Time Distributions in Byzantine Fault-Tolerant Consensus Blockchains
Akihiro Fujihara
arXiv:2407.14299 (2024-07-19)
Some blockchain networks employ a distributed consensus algorithm featuring Byzantine fault tolerance. Notably, certain public chains, such as Cosmos and Tezos, which operate on a proof-of-stake mechanism, have adopted this algorithm. While it is commonly assumed that these blockchains maintain a nearly constant block creation time, empirical analysis reveals fluctuations in this interval; this phenomenon has received limited attention. In this paper, we propose a mathematical model to account for the processes of block propagation and validation within Byzantine fault-tolerant consensus blockchains, aiming to theoretically analyze the probability distribution of block time. First, we propose stochastic processes governing the broadcasting communications among validator nodes. Consequently, we theoretically demonstrate that the probability distribution of broadcast time among validator nodes adheres to the Gumbel distribution. This finding indicates that the distribution of block time typically arises from convolving multiple Gumbel distributions. Additionally, we derive an approximate formula for the block time distribution suitable for data analysis purposes. By fitting this approximation to real-world block time data, we demonstrate the consistent estimation of block time distribution parameters.
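As a toy version of the broadcast model, the sketch below treats the per-block communication time as the maximum of independent per-validator delays, whose distribution is Gumbel-like by extreme-value arguments, and fits it with scipy. The delay model and its parameters are assumptions for illustration; the paper's model convolves several such rounds.

```python
# Minimal sketch: broadcast time to reach all validators behaves like an
# extreme value, so block times look Gumbel-like. We fake per-block
# communication as the maximum of many i.i.d. node delays, then fit a
# Gumbel distribution. Real chains add consensus rounds and timeouts.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(6)
n_blocks, n_validators = 5000, 100

# Per-block: time until the slowest validator receives the proposal,
# with exponential per-node delays (mean 0.5 s) plus a fixed offset.
delays = rng.exponential(0.5, (n_blocks, n_validators))
block_time = 1.0 + delays.max(axis=1)

loc, scale = gumbel_r.fit(block_time)
print(f"Gumbel fit: loc = {loc:.3f} s, scale = {scale:.3f} s")
print(f"observed mean {block_time.mean():.3f} s vs fit mean "
      f"{loc + 0.5772 * scale:.3f} s")   # Gumbel mean = loc + gamma*scale
```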
{"title":"Theoretical Analysis on Block Time Distributions in Byzantine Fault-Tolerant Consensus Blockchains","authors":"Akihiro Fujihara","doi":"arxiv-2407.14299","DOIUrl":"https://doi.org/arxiv-2407.14299","url":null,"abstract":"Some blockchain networks employ a distributed consensus algorithm featuring\u0000Byzantine fault tolerance. Notably, certain public chains, such as Cosmos and\u0000Tezos, which operate on a proof-of-stake mechanism, have adopted this\u0000algorithm. While it is commonly assumed that these blockchains maintain a\u0000nearly constant block creation time, empirical analysis reveals fluctuations in\u0000this interval; this phenomenon has received limited attention. In this paper,\u0000we propose a mathematical model to account for the processes of block\u0000propagation and validation within Byzantine fault-tolerant consensus\u0000blockchains, aiming to theoretically analyze the probability distribution of\u0000block time. First, we propose stochastic processes governing the broadcasting\u0000communications among validator nodes. Consequently, we theoretically\u0000demonstrate that the probability distribution of broadcast time among validator\u0000nodes adheres to the Gumbel distribution. This finding indicates that the\u0000distribution of block time typically arises from convolving multiple Gumbel\u0000distributions. Additionally, we derive an approximate formula for the block\u0000time distribution suitable for data analysis purposes. By fitting this\u0000approximation to real-world block time data, we demonstrate the consistent\u0000estimation of block time distribution parameters.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"82 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141739013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring dynamical phase transitions in time series
Bulcsú Sándor, András Rusu, Károly Dénes, Mária Ercsey-Ravasz, Zsolt I. Lázár
arXiv:2407.13452 (2024-07-18)
There is a growing interest in methods for detecting and interpreting changes in experimental time evolution data. Based on measured time series, the quantitative characterization of dynamical phase transitions at bifurcation points of the underlying chaotic systems is a notoriously difficult task. Building on prior theoretical studies that focus on the discontinuities at $q=1$ in the order-$q$ Rényi entropy of the trajectory space, we measure the derivative of the spectrum. Within the general context of Markov processes, we derive a computationally efficient closed-form expression for this measure. We investigate its properties on well-known dynamical systems, exploring its scope and limitations. The proposed mathematical instrument can serve as a predictor of dynamical phase transitions in time series.
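The closed form derived in the paper is not reproduced here, but the underlying object is easy to compute numerically: for a Markov chain with transition matrix $P$, a standard result gives the order-$q$ Rényi entropy rate of trajectory space as $h_q = \ln \lambda_{\max}(M_q)/(1-q)$ with $(M_q)_{ij} = p_{ij}^q$, and its derivative near $q=1$ can be taken by finite differences. The toy chain below is an assumption for illustration.

```python
# Minimal sketch: order-q Renyi entropy rate of a Markov chain's trajectory
# space, h_q = ln(lambda_max(M_q)) / (1 - q) with (M_q)_ij = (p_ij)^q, and
# its derivative near q = 1 by a central finite difference. The paper
# derives a closed form -- this numeric version is illustrative.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],        # toy 3-state transition matrix
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])

def renyi_rate(q, eps=1e-9):
    if abs(q - 1.0) < eps:            # q -> 1 limit: Kolmogorov-Sinai rate
        evals, evecs = np.linalg.eig(P.T)
        pi = np.real(evecs[:, np.argmax(evals.real)])
        pi = pi / pi.sum()            # stationary distribution
        mask = P > 0
        return -np.sum((pi[:, None] * P)[mask] * np.log(P[mask]))
    lam = np.max(np.linalg.eigvals(P ** q).real)
    return np.log(lam) / (1.0 - q)

dq = 1e-4
d_at_1 = (renyi_rate(1 + dq) - renyi_rate(1 - dq)) / (2 * dq)
print("h_1 (KS rate):", renyi_rate(1.0))
print("d h_q / dq at q = 1:", d_at_1)
```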
{"title":"Measuring dynamical phase transitions in time series","authors":"Bulcsú Sándor, András Rusu, Károly Dénes, Mária Ercsey-Ravasz, Zsolt I. Lázár","doi":"arxiv-2407.13452","DOIUrl":"https://doi.org/arxiv-2407.13452","url":null,"abstract":"There is a growing interest in methods for detecting and interpreting changes\u0000in experimental time evolution data. Based on measured time series, the\u0000quantitative characterization of dynamical phase transitions at bifurcation\u0000points of the underlying chaotic systems is a notoriously difficult task.\u0000Building on prior theoretical studies that focus on the discontinuities at\u0000$q=1$ in the order-$q$ R'enyi-entropy of the trajectory space, we measure the\u0000derivative of the spectrum. We derive within the general context of Markov\u0000processes a computationally efficient closed-form expression for this measure.\u0000We investigate its properties through well-known dynamical systems exploring\u0000its scope and limitations. The proposed mathematical instrument can serve as a\u0000predictor of dynamical phase transitions in time series.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"125 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141739014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
False positives for gravitational lensing: the gravitational-wave perspective
David Keitel
arXiv:2407.12974 (2024-07-17)
For the first detection of a novel astrophysical phenomenon, scientific standards are particularly high. Especially in a multi-messenger context, there are also opportunity costs to follow-up observations on any detection claims. So in searching for the still elusive lensed gravitational waves, care needs to be taken in controlling false positives. In particular, many methods for identifying strong lensing rely on some form of parameter similarity or waveform consistency, which under rapidly growing catalog sizes can expose them to false positives from coincident but unlensed events if proper care is not taken. And searches for waveform deformations in all lensing regimes are subject to degeneracies, which need to be mitigated, between lensing, intrinsic parameters, insufficiently modelled effects such as orbital eccentricity, and even deviations from general relativity. Robust lensing studies also require understanding and mitigating glitches and non-stationarities in the detector data. This article reviews sources of possible false positives (and their flip side: false negatives) in gravitational-wave lensing searches and the main approaches the community is pursuing to mitigate them.
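As a back-of-the-envelope illustration of the catalog-growth effect mentioned above, the sketch below draws Gaussian toy posteriors for unlensed events and counts, as a function of catalog size, how many of the quadratically many event pairs exceed a high overlap threshold by chance. The one-dimensional Gaussian overlap is a stand-in for the multi-dimensional posterior-overlap statistics used in real lensing searches.

```python
# Minimal sketch of why parameter-similarity lensing searches face growing
# false-positive risk: event pairs scale quadratically with catalog size,
# so even a small per-pair chance-overlap probability yields expected
# coincidences. Gaussian toy posteriors; population and widths assumed.
import numpy as np

rng = np.random.default_rng(7)

def bhattacharyya(mu1, s1, mu2, s2):
    """Overlap coefficient of two 1-D Gaussian posteriors (1 = identical)."""
    return np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * np.exp(
        -((mu1 - mu2) ** 2) / (4 * (s1**2 + s2**2)))

# Catalog of unlensed events: chirp masses drawn from a broad population,
# each measured with some posterior width.
for n_events in (10, 100, 1000):
    mu = rng.uniform(10, 40, n_events)          # posterior means [Msun]
    sig = rng.uniform(0.5, 2.0, n_events)       # posterior widths
    i, j = np.triu_indices(n_events, k=1)
    ov = bhattacharyya(mu[i], sig[i], mu[j], sig[j])
    print(f"N={n_events:5d}: {i.size:7d} pairs, "
          f"{(ov > 0.99).sum():4d} chance pairs with overlap > 0.99")
```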
{"title":"False positives for gravitational lensing: the gravitational-wave perspective","authors":"David Keitel","doi":"arxiv-2407.12974","DOIUrl":"https://doi.org/arxiv-2407.12974","url":null,"abstract":"For the first detection of a novel astrophysical phenomenon, scientific\u0000standards are particularly high. Especially in a multi-messenger context, there\u0000are also opportunity costs to follow-up observations on any detection claims.\u0000So in searching for the still elusive lensed gravitational waves, care needs to\u0000be taken in controlling false positives. In particular, many methods for\u0000identifying strong lensing rely on some form of parameter similarity or\u0000waveform consistency, which under rapidly growing catalog sizes can expose them\u0000to false positives from coincident but unlensed events if proper care is not\u0000taken. And searches for waveform deformations in all lensing regimes are\u0000subject to degeneracies we need to mitigate between lensing, intrinsic\u0000parameters, insufficiently modelled effects such as orbital eccentricity, or\u0000even deviations from general relativity. Robust lensing studies also require\u0000understanding and mitigating glitches and non-stationarities in the detector\u0000data. This article reviews sources of possible false positives (and their flip\u0000side: false negatives) in gravitational-wave lensing searches and the main\u0000approaches the community is pursuing to mitigate them.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141739015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}