Pub Date: 2025-10-25 | DOI: 10.1134/S1063779625700510
T. Zh. Bezhanyan, S. Shadmehri, O. I. Streltsova, M. I. Zuev, M. Yu. Bondarev, I. A. Kolesnikova, Yu. S. Severiukhin, D. M. Utina
The behavioral test “Morris Water Maze” is a widely used tool for studying spatial learning, behavioral reactions, and memory in small laboratory animals. Within the framework of the joint project between MLIT and LRB JINR aimed at creating an information system, we have designed a web service to automate the analysis of experimental data from the “Morris Water Maze” behavioral test. The automation tasks center on the analysis of video data, and the development of convenient tools can significantly reduce both the research time and the influence of the human factor. Here we present the results of the development of a web service designed to annotate and classify trajectory data of rodent movements in the “Morris Water Maze” behavioral test. The functionality of the service enables the user to verify the correctness of the constructed trajectory, classify the trajectory using a deep learning approach, and obtain a number of characteristic parameters of the rodent’s movements during the experiment. The web service was developed and deployed on the ML/DL/HPC ecosystem of the HybriLIT Heterogeneous Computing Platform.
Title: Algorithm for the Analysis of the Laboratory Animal Trajectories in the “Morris Water Maze” and Its Implementation as a Web Service
Physics of Particles and Nuclei, Vol. 56, No. 6, pp. 1370–1374
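The characteristic movement parameters mentioned in the abstract can be illustrated with a minimal sketch that derives path length and mean speed from timestamped (t, x, y) trajectory samples. This is not the service's actual implementation; the function and field names are hypothetical.

```python
import math

def trajectory_parameters(points):
    """Compute simple descriptive parameters from (t, x, y) samples.

    `points` is a list of (t, x, y) tuples ordered by time. The
    parameter set here (path length, mean speed) is illustrative,
    not the web service's actual feature list.
    """
    path_length = 0.0
    # Sum Euclidean distances between consecutive samples.
    for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:]):
        path_length += math.hypot(x1 - x0, y1 - y0)
    duration = points[-1][0] - points[0][0]
    mean_speed = path_length / duration if duration > 0 else 0.0
    return {"path_length": path_length, "mean_speed": mean_speed}
```

For example, three samples along two 3-4-5 segments over two seconds give a path length of 10 and a mean speed of 5 (in the units of the input coordinates).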
Pub Date: 2025-10-25 | DOI: 10.1134/S106377962570039X
G. A. Ososkov
A key stage in offline processing of experimental HEP data is the reconstruction of the trajectories (tracks) of interacting particles from measurement data. For modern high-luminosity collider experiments, such as the HL-LHC and NICA, a particular challenge for tracking is the very high, megahertz-scale interaction rate, which leads to an order-of-magnitude increase in the intensity of the data stream to be processed and, in addition, to a significant overlap of event track data when they are registered in the track detectors. All these circumstances, recognized by physicists as the “Tracking Crisis,” have shown that the tracking algorithms already in use are not efficient, accurate, and scalable enough to handle the data obtained in high-luminosity experiments. To overcome this crisis, in 2018 a group of physicists from CERN and other physics centers, within the HEPTrkX project, organized the TrackML competition to develop new solutions to tracking problems using deep neural networks; a data set for training and testing was prepared and published on the Kaggle platform. The TrackML competition stimulated a great deal of important research, leading to the development of effective tracking algorithms based on graph neural networks and transformers, as well as the revival of tracking based on Hopfield neural networks, enhanced with the computational means of adiabatic quantum computers. The experience in developing tracking algorithms based on machine learning methods, accumulated over the last decade by specialists from MLIT JINR, has allowed them to engage actively in research on overcoming the tracking crisis, not only by using already published results, but also through original innovations that take into account the specifics of the detectors in the high-luminosity experiments of the NICA megaproject at JINR. In the present report, we give a brief review of the ongoing work and discuss its prospects.
Title: Deep Learning Methods as a Tool for Overcoming the Crisis of Particle Tracking in High Luminosity HEP Experiments
Physics of Particles and Nuclei, Vol. 56, No. 6, pp. 1299–1308
Pub Date: 2025-10-25 | DOI: 10.1134/S1063779625700637
A. S. Galoyan, T. Q. T. Le, A. V. Taranenko, V. V. Uzhinsky
The ultrarelativistic quantum molecular dynamics (UrQMD) model is widely applied to the simulation of multi-particle production in hadron–nucleus and nucleus–nucleus interactions at high energies. In order to describe the evaporation and multi-fragmentation of nuclear residuals, we have extended version 3.4 of the UrQMD model with the statistical multi-fragmentation model (SMM). The coupling of UrQMD and SMM makes it possible to describe neutron and nuclear fragment production well using the EoS mode of the UrQMD model. The UrQMD 3.4 model extended with the SMM can be applied in the NICA and FAIR experiments.
Title: Coupling of UrQMD 3.4 and SMM Models for Simulation of Neutron and Nuclear Fragment Productions in Nucleus–Nucleus Interactions
Physics of Particles and Nuclei, Vol. 56, No. 6, pp. 1433–1438
Pub Date: 2025-10-25 | DOI: 10.1134/S1063779625700765
P. A. Kupriyanov, N. V. Rudavin, D. G. Kagramanyan, V. F. Mayboroda, D. A. Derkach, R. A. Shakhovoy
In various applications of optical communications, such as quantum key distribution (QKD) systems, the task of polarization control in optical fiber arises. A polarization controller (PC) on the receiver side is used to solve this problem. In our work, we investigate two machine learning approaches to polarization control: supervised learning (SL) and reinforcement learning (RL). We generalized the analytical solution of the polarization control problem using the SL approach: the supervised learning algorithm was trained on simulations and then validated on an experimental setup, and we compared the supervised approach with the analytical solution. The advantage of the RL approach lies in its capability to approximate nonlinear effects related to imperfections of the detection process. Effective operation of polarization control algorithms normally requires knowledge of the PC parameters; the reinforcement learning agent also eliminates hardware calibration procedures, since it learns the necessary system parameters from the experience of interacting with the environment. The RL agent was first pretrained on simulations and then fine-tuned on a practical QKD system. We demonstrated that the RL agent’s tuning achieves a 10% quantum bit error rate (QBER) on our setup and converges within 10 steps.
Title: Polarization Drift Compensation for Quantum Key Distribution Using Machine Learning
Physics of Particles and Nuclei, Vol. 56, No. 6, pp. 1503–1508
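The quantum bit error rate quoted above has a simple operational definition: the fraction of sifted-key positions where the receiver's bit differs from the sender's. A minimal sketch of that definition (illustrative only, not the authors' code):

```python
def qber(sent_bits, received_bits):
    """Quantum bit error rate over a sifted-key sample: the
    fraction of positions where the received bit differs from
    the sent bit."""
    if len(sent_bits) != len(received_bits) or not sent_bits:
        raise ValueError("need two non-empty bit sequences of equal length")
    errors = sum(a != b for a, b in zip(sent_bits, received_bits))
    return errors / len(sent_bits)
```

For instance, one flipped bit in a ten-bit sample yields a QBER of 0.1, i.e. the 10% figure reported in the abstract.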
Pub Date: 2025-10-25 | DOI: 10.1134/S1063779625700649
V. Gareyan, N. Margaryan, Zh. Gevorkian
Antireflectivity is crucial for optimizing the efficiency of solar cells. Achieving effective anti-reflection with high-refractive-index materials is challenging due to their inherently high reflectivity, as described by the Fresnel equations. Recent research has explored the potential of nanorough surfaces, where the roughness parameters are much smaller than the wavelength of incident light. This focus has shifted with the advent of modern theoretical approaches that incorporate modified boundary conditions. Our study examines weakly rough opaque surfaces and demonstrates significant differences in the predicted scattering coefficients compared to older theories. These findings are validated by experimental results on nano-roughened silicon films across wavelengths of 300–400 nm. In this part of the spectrum, the reflection is shown to decrease significantly, which opens new possibilities for solar cell technology to harness energy from previously inaccessible regions of the spectrum.
Title: Modern Theoretical Approach for Description of Antireflectivity
Physics of Particles and Nuclei, Vol. 56, No. 6, pp. 1439–1443
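The high reflectivity of high-index materials that the abstract refers to is easy to quantify at normal incidence from the Fresnel equations. A sketch under the simplifying assumption of real refractive indices (silicon in the paper's 300–400 nm band actually has a larger, complex, wavelength-dependent index, so the true reflectance there is higher still):

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence power reflectance at a planar interface
    between media with real refractive indices n1 and n2:
    R = ((n1 - n2) / (n1 + n2))**2 (absorption neglected)."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Air to a high-index material, using n ~ 3.5 (a commonly quoted
# value for silicon in the near infrared) as an illustration:
R = fresnel_reflectance(1.0, 3.5)  # roughly 0.31, i.e. ~31% reflected
```

Even this idealized figure shows why a bare high-index surface wastes a large fraction of the incident light without anti-reflection measures.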
Pub Date: 2025-10-25 | DOI: 10.1134/S1063779625700650
A. S. Gevorkyan, A. V. Bogdanov, V. V. Mareev
A general three-body problem is formulated on a curved geometry related to the energy surface of the system of bodies, which allows us to reveal hidden symmetries of the internal motion of the dynamical system and to describe it by a system of stiff 6th-order ODEs instead of the usual 8th-order one. In this formulation, the three-body problem is equivalent to the problem of propagation of a flow of geodesic trajectories on a 3D Riemannian manifold. A new criterion for the divergence of close geodesic trajectories is defined, similar to the Lyapunov exponent but on finite time intervals. Using the stochastic equation of motion of the system of bodies, a second-order partial differential equation of the Fokker–Planck type is derived for the probability distribution of geodesics (PDG) in phase space. Using the PDG in a current tube, the entropy of the low-dimensional dynamical system is constructed, and its complexity and disequilibrium are estimated. The behavior of the new timing parameter (internal time) in the global, or 3D Jacobi, space is studied in detail, and its dimension is calculated.
Title: Three-Body Problem in Conformal-Euclidean Space: Complexity of a Low-Dimensional System
Physics of Particles and Nuclei, Vol. 56, No. 6, pp. 1444–1448
Pub Date: 2025-10-25 | DOI: 10.1134/S1063779625700339
Gh. Adam, S. Adam
Prerequisites for the implementation of artificial intelligence (AI) driven automatic adaptive quadrature of the one-dimensional Riemann integral are formulated. The need for this approach follows from the occurrence of critical circumstances that result in code fragility, preventing the derivation of reliable and fast output, since their handling is beyond the control of existing automatic adaptive quadrature algorithms. In the near future, the AI-generated process is expected to show narrow AI capabilities (the ability to solve a single task at a time), with limited-memory AI functionalities (few persistent memory features).
Title: Prerequisites for Artificial Intelligence Driven Automatic Adaptive Quadrature
Physics of Particles and Nuclei, Vol. 56, No. 6, pp. 1255–1263
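For context, a conventional (non-AI) automatic adaptive quadrature scheme of the kind the paper aims to harden can be sketched as the classic adaptive Simpson rule: bisect the interval until a local error estimate meets the tolerance. This is a textbook illustration, not the authors' algorithm:

```python
def _simpson(f, a, b):
    # Basic Simpson rule on [a, b].
    c = (a + b) / 2.0
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def adaptive_simpson(f, a, b, tol=1e-9, depth=50):
    """Automatic adaptive quadrature of f over [a, b]: recursively
    bisect until the local Simpson error estimate is within tol."""
    def recurse(a, b, whole, tol, depth):
        c = (a + b) / 2.0
        left, right = _simpson(f, a, c), _simpson(f, c, b)
        # |S(left)+S(right) - S(whole)| / 15 estimates the error of
        # the refined rule; accept and apply Richardson correction.
        if depth <= 0 or abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, c, left, tol / 2.0, depth - 1)
                + recurse(c, b, right, tol / 2.0, depth - 1))
    return recurse(a, b, _simpson(f, a, b), tol, depth)
```

The "critical circumstances" the abstract mentions correspond to integrands for which such local error estimates are fooled (e.g. narrow spikes missed by all sample points), which is exactly where the fixed recursion logic becomes fragile.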
Pub Date: 2025-10-25 | DOI: 10.1134/S1063779625700820
T. I. Mikhailova, B. Erdemchimeg
Nuclear fragmentation reactions are important for studying the characteristics of nuclear matter, producing secondary beams, and obtaining new isotopes. It is necessary to be able to predict the yields of isotopes produced in these reactions, and different models exist that allow one to predict these quantities. In this report, a transport-statistical approach is discussed. Model calculations are compared with experimental data obtained with the COMBAS setup at the Flerov Laboratory of Nuclear Reactions of the Joint Institute for Nuclear Research for collisions of an 18O (35 MeV per nucleon) beam with 9Be and 181Ta targets, as well as with several well-known models. The target dependence of isotope ratios is studied; its behaviour is explained by the strong correlation between the mass of a secondary fragment and the impact parameter of the reaction. The different pathways of the reaction dynamics in the case of heavy and light targets are discussed.
Title: Modified Transport Approach for Description of Fragmentation Reactions in Heavy-Ion Collisions
Physics of Particles and Nuclei, Vol. 56, No. 6, pp. 1536–1542
Pub Date: 2025-10-25 | DOI: 10.1134/S1063779625700431
E. Alexandrov, I. Alexandrov, A. Chebotov, I. Filozova, K. Gertsenberger, D. Priakhina, I. Romanov, G. Shestakova, A. Yakovlev
The collection, storage, and processing of experimental data are an integral part of modern high-energy physics experiments. The Configuration Information System (CIS) is an essential element of the complex of information systems developed for particle collision experiments. The CIS has been developed and implemented for the BM@N experiment to store and provide data on the configuration of the experiment’s hardware and software systems while collecting data from the detectors in online mode. The CIS allows loading configuration information into the data acquisition and online processing systems, activating the hardware setups, and launching all necessary software tasks with the required parameters on specified distributed nodes. The architecture of the CIS mainly comprises the User Web Interface, the Configuration Database that stores configuration data, and the continuously operating Configuration Manager, which uses the API of the Dynamic Deployment System (DDS), developed by the FAIR collaboration, for managing a set of intercommunicating processes. The Web interface has convenient features for creating and viewing a configuration topology, as well as for monitoring the online tasks. The CIS provides rich error reporting and logging facilities for both individual tasks and whole work sessions.
Title: Production of the Configuration Information System for the BM@N Experiment
Physics of Particles and Nuclei, Vol. 56, No. 6, pp. 1328–1333
Pub Date: 2025-10-25 | DOI: 10.1134/S1063779625700686
M. Hnatič, M. Kecer, T. Lučivjanský, L. Mižišin, Yu. G. Molotkov
The universal properties of the dynamic isotropic percolation process are analyzed employing the field-theoretic perturbative renormalization group approach. In particular, our aim is to discuss recent developments related to the three-loop calculation of the dynamic exponent z. The model is studied in the vicinity of the upper critical dimension d_c = 6 by means of dimensional regularization, accompanied by the minimal subtraction method for the extraction of ultraviolet divergences. Preliminary results are presented for selected topologies of the three-loop Feynman diagrams appearing in the Dyson equation for the propagator of the model.
Title: Dynamic Isotropic Percolation Process: Three-Loop Approximation
Physics of Particles and Nuclei, Vol. 56, No. 6, pp. 1462–1466