We review the issue of localization in quantum field theory and detail the nonrelativistic limit. Three distinct localization schemes are examined: the Newton–Wigner scheme, the algebraic quantum field theory scheme, and the modal scheme. Among these, the algebraic quantum field theory scheme provides a fundamental concept of localization, rooted in its axiomatic formulation. In contrast, the Newton–Wigner scheme draws inspiration from the Born interpretation and applies mainly to the nonrelativistic regime. The modal scheme, which represents single particles as positive-frequency modes of the Klein–Gordon equation, is found to be incompatible with algebraic quantum field theory localization.
This review delves into the distinctive features of each scheme, offering a comparative analysis. A specific focus is placed on the property of independence between state preparations and observable measurements in spacelike separated regions. Notably, the notion of localization in algebraic quantum field theory violates this independence due to the Reeh–Schlieder theorem. Drawing parallels with the quantum teleportation protocol, it is argued that causality is nevertheless not violated. Additionally, we consider the nonrelativistic limit of quantum field theory, revealing the emergence of the Born scheme as the fundamental concept of localization. Consequently, the nonlocality associated with the Reeh–Schlieder theorem is shown to be suppressed under nonrelativistic conditions.
In modern collider experiments, the quest to explore fundamental interactions between elementary particles has reached unparalleled levels of precision. Signatures from particle physics detectors are low-level objects (such as energy depositions or tracks) encoding the physics of collisions (the final-state particles of hard-scattering interactions). Their complete simulation in a detector is a computationally and storage-intensive task. To address this bottleneck, alternative approaches have been developed that introduce additional assumptions and trade accuracy for speed. The field has seen a surge of interest in surrogate modeling of detector simulation, fueled by advances in deep generative models. These models aim to generate responses that are statistically identical to the observed data. In this paper, we conduct a comprehensive taxonomic review of the existing literature on the simulation of detector signatures from both methodological and application-oriented perspectives. First, we formulate the problem of detector signature simulation and discuss its variations and how they can be unified. Next, we classify the state-of-the-art methods into five distinct categories based on their underlying model architectures, summarizing their respective generation strategies. Finally, we highlight the challenges and opportunities that lie ahead in detector signature simulation, setting the stage for future research and development.
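The surrogate idea above can be illustrated with a deliberately minimal sketch: fit a cheap parametric model to expensive "full simulation" samples and draw new events from it. The single-Gaussian surrogate and the numbers below are illustrative stand-ins; the deep generative models surveyed in the paper replace this with learned, far more flexible densities.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Full simulation": pretend detector responses (e.g. calorimeter energy
# depositions) are expensive samples from an underlying distribution.
full_sim = rng.normal(loc=25.0, scale=4.0, size=10_000)

# Surrogate: fit a simple parametric model to the full-simulation sample.
mu, sigma = full_sim.mean(), full_sim.std()

# Fast generation: sampling the surrogate is cheap compared to running
# the full detector simulation again.
fast_sim = rng.normal(loc=mu, scale=sigma, size=10_000)

# The surrogate sample should be statistically close to the full one.
print(abs(fast_sim.mean() - full_sim.mean()) < 0.5)  # True
```

The design choice mirrored here is the one stated in the abstract: accept a modeling assumption (a fixed parametric family) in exchange for generation speed.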
Pyrochlore oxides, A2B2O7, are known as strongly correlated systems with magnetic frustration caused by spins forming a network of corner-sharing tetrahedra. A systematic literature review has been carried out for pyrochlore oxides A2B2O7 with A = Nd and B = Ru, Ir, Hf, Pb, Mo, and Zr. One material receiving particular attention is the system with A = Nd and B = Zr (Nd2Zr2O7), which is reported to give rise to a magnetic fragmentation state within a magnetically ordered state. However, this phenomenon has not been confirmed in other Nd systems. Magnetic fragmentation characterizes the magnetic ground state of the Nd2Zr2O7 pyrochlore, so determining whether this phenomenon, or signs of it, appears in other pyrochlore materials, especially Nd-based pyrochlore systems, is an important subject for review. Review articles on pyrochlores with various bases such as Gd, Er, and Tb have already been published; however, a systematic literature review of Nd-based pyrochlores focusing on their magnetic properties is not yet available. The most important result of this review is that Nd-based pyrochlores with different B ions show different magnetic transitions. Moreover, the emergence of magnetic fragmentation states in magnetically ordered states was not found in systems other than Nd2Zr2O7. Future studies of Nd-based pyrochlores could also focus on the correlation between physical and magnetic properties, together with possible applications.
We provide a perspective on the fundamental relationship between physics and computation, exploring the conditions under which a physical system can be harnessed for computation and the practical means to achieve this. Unlike traditional digital computers that impose discreteness on continuous substrates, unconventional computing embraces the inherent properties of physical systems. Exploring simultaneously the intricacies of physical implementations and applied computational paradigms, we discuss the interdisciplinary developments of unconventional computing. Here, we focus on the potential of photonic substrates for unconventional computing, implementing artificial neural networks to solve data-driven machine learning tasks. Several photonic neural network implementations are discussed, highlighting their potential advantages over electronic counterparts in terms of speed and energy efficiency. Finally, we address the challenges of achieving learning and programmability within physical substrates, outlining key strategies for future research.
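One common strategy for the learning-and-programmability challenge mentioned above is to keep the physical transform fixed and random and train only a digital linear readout (as in reservoir computing and extreme learning machines). A minimal numpy sketch, with the "photonic" layer replaced by a hypothetical fixed random matrix and a tanh nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: XOR, which a purely linear readout cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Fixed "physical" layer: random linear mixing plus a nonlinearity.
# In a photonic substrate this mapping would be realized optically
# and never trained.
W_in = rng.normal(size=(2, 50))
H = np.tanh(X @ W_in + rng.normal(size=50))

# Trainable digital readout: ridge regression on the output weights only.
lam = 1e-6
W_out = np.linalg.solve(H.T @ H + lam * np.eye(50), H.T @ y)

pred = (H @ W_out > 0.5).astype(float)
print(pred)  # recovers XOR: [0. 1. 1. 0.]
```

Training only the readout sidesteps the hard problem of propagating gradients through the physical substrate, at the cost of relying on the random transform being expressive enough.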
The detection of out-of-distribution data points is a common task in particle physics. It is used for monitoring complex particle detectors or for identifying rare and unexpected events that may be indicative of new phenomena or physics beyond the Standard Model. Recent advances in machine learning for anomaly detection have encouraged the application of such techniques to particle physics problems. This review article provides an overview of the state-of-the-art techniques for anomaly detection in particle physics using machine learning. We discuss the challenges associated with anomaly detection in large and complex data sets, such as those produced by high-energy particle colliders, and highlight some of the successful applications of anomaly detection in particle physics experiments.
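The core pattern behind most of the methods surveyed is density-based scoring: fit a model to "background" data only and flag points to which the model assigns low likelihood. A toy sketch with a single Gaussian and a Mahalanobis-distance score (the deep methods in the review replace this with autoencoders, normalizing flows, and similar models; all values here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Background ("in-distribution") events: two correlated features,
# standing in for reconstructed event-level quantities.
bg = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=5000)

# Fit a density model to background only (here: a single Gaussian).
mu = bg.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(bg.T))

def anomaly_score(x):
    """Squared Mahalanobis distance: large for points far from the bulk."""
    d = x - mu
    return float(d @ cov_inv @ d)

typical = anomaly_score(np.array([0.1, 0.0]))
outlier = anomaly_score(np.array([6.0, -6.0]))
print(typical < outlier)  # the out-of-distribution event scores far higher
```

Crucially, no labeled anomalies are needed at training time, which is what makes this family of methods attractive for searches for unexpected phenomena.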
The present review aims to show that a modified space–time with an invariant minimum speed provides a relation with Weyl geometry in the weak-field Newtonian approximation. The deformed Special Relativity, so-called Symmetrical Special Relativity (SSR), has an invariant minimum speed V, which is associated with a preferred reference frame for representing the vacuum energy, thus leading to the cosmological constant Λ. The equation of state (EOS) of the vacuum energy, p = -ρ, where p is the pressure and ρ is the vacuum energy density, emerges naturally from such a space–time. With the aim of establishing a relationship between SSR and Λ in the modified metric of the space–time, we consider a dark spherical universe with Hubble radius R_H, having a very low value of Λ that governs the accelerated expansion of the universe. In doing this, we aim to show that the SSR metric is equivalent to a de Sitter (dS) metric. On the other hand, according to the Boomerang experiment, which reveals a slightly accelerated expansion of the universe, SSR leads to a dS metric close to a flat space–time, in the scenario where space is quasi-flat, so that Ω ≈ 1, with Ω_m representing matter, including cold dark matter, and Ω_Λ the vacuum energy. Thus, the theory is adjusted for the observed redshift and the corresponding time.
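For reference, the standard vacuum-energy relations invoked above can be written in textbook form (conventional symbols, not quoted from the paper itself):

```latex
% Vacuum equation of state: negative pressure equal in magnitude
% to the vacuum energy density.
p = -\rho
% de Sitter radius set by the cosmological constant:
R_{\mathrm{dS}} = \sqrt{\tfrac{3}{\Lambda}}
% so that for a \Lambda-dominated, quasi-flat universe whose horizon
% is of order the Hubble radius R_H,
\Lambda \sim \tfrac{3}{R_H^{2}}
```

These are the generic relations linking a small cosmological constant to a large de Sitter radius, which is the regime the abstract describes.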
The full optimization of the design and operation of instruments whose functioning relies on the interaction of radiation with matter is a super-human task, due to the large dimensionality of the space of possible choices for geometry, detection technology, materials, data acquisition, and information-extraction techniques, and the interdependence of the related parameters. On the other hand, massive potential gains in performance over standard, “experience-driven” layouts are in principle within our reach if an objective function fully aligned with the final goals of the instrument is maximized through a systematic search of the configuration space. The stochastic nature of the involved quantum processes makes the modeling of these systems an intractable problem from a classical statistics point of view, yet the construction of a fully differentiable pipeline and the use of deep learning techniques may allow the simultaneous optimization of all design parameters.
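The gradient-based design loop described above can be caricatured in one dimension: a differentiable surrogate objective maps a design parameter to expected performance, and gradient ascent finds the optimum. Everything here is a hypothetical toy (the "absorber thickness" t, the objective, and its trade-off term); real pipelines are high-dimensional, learned from simulation, and differentiated automatically with tools such as JAX or PyTorch.

```python
import numpy as np

# Hypothetical differentiable surrogate: signal efficiency saturates
# with absorber thickness t, while a cost/noise term penalizes large t.
def objective(t):
    return 1.0 - np.exp(-t) - 0.2 * t

def grad(t):
    # Analytic derivative of the surrogate objective.
    return np.exp(-t) - 0.2

# Gradient ascent over the single design parameter.
t = 0.1
for _ in range(200):
    t += 0.5 * grad(t)

# The optimum satisfies exp(-t) = 0.2, i.e. t = ln 5 ≈ 1.609.
print(round(t, 3))  # 1.609
```

The point of the differentiable formulation is exactly this: once every stage of the pipeline admits gradients, all design parameters can be moved simultaneously toward the instrument's final objective rather than tuned one at a time.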
In this white paper, we lay down our plans for the design of a modular and versatile modeling tool for the end-to-end optimization of complex instruments for particle physics experiments as well as industrial and medical applications that share the detection of radiation as their basic ingredient. We consider a selected set of use cases to highlight the specific needs of different applications.
The inevitable progress from the betatron to high-energy circular induction accelerators is described. On the basis of induction synchrotrons that have been put to practical use, the characteristics and figures of merit of circular induction accelerators are explained in detail. Details of the slow-cycling induction synchrotron and the fast-cycling induction synchrotron, which have been proven at KEK since 2004, are described. The existing RF acceleration systems in the KEK 12 GeV proton synchrotron and the 500 MeV Booster ring were fully replaced by induction acceleration systems. A series of studies performed after the demonstration of the induction synchrotron strongly indicates that existing circular RF accelerators such as the cyclotron, microtron, and FFAG can be replaced by circular induction accelerators with increased flexibility and performance. Attractive applications such as the next generation of hadron therapy, high-energy cluster ion generators, and heavy-ion inertial fusion are introduced.