To understand and control the dynamics of coupled oscillators, it is important to reveal the structure of the interaction network from observed data. While various techniques have been developed for inferring the networks of asynchronous systems, it remains challenging to infer the network of synchronized oscillators without external stimulation. In this study, we develop a method for non-invasively inferring the network of synchronized and/or desynchronized oscillators. One approach to network inference is to fit the data to a set of differential equations describing the dynamics of phase oscillators. However, we show that this method fails to infer the true network because of problems that arise when short-time phase differences are used. We therefore propose a method based on the circle map, which describes the phase change over one oscillatory cycle. We demonstrate the efficacy of the proposed method through successful inference of the network structure from simulated data of limit-cycle oscillator models. Our method provides a unified and concise framework for network estimation in a wide class of oscillator systems.
{"title":"Network inference from oscillatory signals based on circle map","authors":"Akari Matsuki, Hiroshi Kori, Ryota Kobayashi","doi":"arxiv-2407.07445","DOIUrl":"https://doi.org/arxiv-2407.07445","url":null,"abstract":"To understand and control the dynamics of coupled oscillators, it is\u0000important to reveal the structure of the interaction network from observed\u0000data. While various techniques have been developed for inferring the network of\u0000asynchronous systems, it remains challenging to infer the network of\u0000synchronized oscillators without external stimulations. In this study, we\u0000develop a method for non-invasively inferring the network of synchronized\u0000and/or de-synchronized oscillators. An approach to network inference would be\u0000to fit the data to a set of differential equations describing the dynamics of\u0000phase oscillators. However, we show that this method fails to infer the true\u0000network due to the problems that arise when we use short-time phase\u0000differences. Therefore, we propose a method based on the circle map, which\u0000describes the phase change in one oscillatory cycle. We demonstrate the\u0000efficacy of the proposed method through the successful inference of the network\u0000structure from simulated data of limit cycle oscillator models. Our method\u0000provides a unified and concise framework for network estimation for a wide\u0000class of oscillator systems.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141587674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Ablikim, M. N. Achasov, P. Adlarson, O. Afedulidis, X. C. Ai, R. Aliberti, A. Amoroso, Q. An, Y. Bai, O. Bakina, I. Balossino, Y. Ban, H. -R. Bao, V. Batozskaya, K. Begzsuren, N. Berger, M. Berlowski, M. Bertani, D. Bettoni, F. Bianchi, E. Bianco, A. Bortone, I. Boyko, R. A. Briere, A. Brueggemann, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, X. Y. Chai, J. F. Chang, G. R. Che, Y. Z. Che, G. Chelkov, C. Chen, C. H. Chen, Chao Chen, G. Chen, H. S. Chen, H. Y. Chen, M. L. Chen, S. J. Chen, S. L. Chen, S. M. Chen, T. Chen, X. R. Chen, X. T. Chen, Y. B. Chen, Y. Q. Chen, Z. J. Chen, Z. Y. Chen, S. K. Choi, G. Cibinetto, F. Cossio, J. J. Cui, H. L. Dai, J. P. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, C. Q. Deng, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, B. Ding, X. X. Ding, Y. Ding, Y. Ding, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, M. C. Du, S. X. Du, Y. Y. Duan, Z. H. Duan, P. Egorov, Y. H. Fan, J. Fang, J. Fang, S. S. Fang, W. X. Fang, Y. Fang, Y. Q. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, J. H. Feng, Y. T. Feng, M. Fritsch, C. D. Fu, J. L. Fu, Y. W. Fu, H. Gao, X. B. Gao, Y. N. Gao, Yang Gao, S. Garbolino, I. Garzia, L. Ge, P. T. Ge, Z. W. Ge, C. Geng, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, S. Gramigna, M. Greco, M. H. Gu, Y. T. Gu, C. Y. Guan, A. Q. Guo, L. B. Guo, M. J. Guo, R. P. Guo, Y. P. Guo, A. Guskov, J. Gutierrez, K. L. Han, T. T. Han, F. Hanisch, X. Q. Hao, F. A. Harris, K. K. He, K. L. He, F. H. Heinsius, C. H. Heinz, Y. K. Heng, C. Herold, T. Holtmann, P. C. Hong, G. Y. Hou, X. T. Hou, Y. R. Hou, Z. L. Hou, B. Y. Hu, H. M. Hu, J. F. Hu, S. L. Hu, T. Hu, Y. Hu, G. S. Huang, K. X. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Y. S. Huang, T. Hussain, F. Hölzken, N. Hüsken, N. in der Wiesche, J. Jackson, S. Janchiv, J. H. Jeong, Q. Ji, Q. P. Ji, W. Ji, X. B. Ji, X. L. Ji, Y. Y. Ji, X. Q. Jia, Z. K. Jia, D. Jiang, H. B. Jiang, P. C. Jiang, S. S. Jiang, T. J. Jiang, X. S. Jiang, Y. Jiang, J. B. Jiao, J. K. Jiao, Z. Jiao, S. Jin, Y. Jin, M. Q. Jing, X. M. Jing, T. Johansson, S. Kabana, N. Kalantar-Nayestanaki, X. L. Kang, X. S. Kang, M. Kavatsyuk, B. C. Ke, V. Khachatryan, A. Khoukaz, R. Kiuchi, O. B. Kolcu, B. Kopf, M. Kuessner, X. Kui, N. Kumar, A. Kupsc, W. Kühn, J. J. Lane, L. Lavezzi, T. T. Lei, Z. H. Lei, M. Lellmann, T. Lenz, C. Li, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. B. Li, H. J. Li, H. N. Li, Hui Li, J. R. Li, J. S. Li, K. Li, K. L. Li, L. J. Li, L. K. Li, Lei Li, M. H. Li, P. R. Li, Q. M. Li, Q. X. Li, R. Li, S. X. Li, T. Li, W. D. Li, W. G. Li, X. Li, X. H. Li, X. L. Li, X. Y. Li, X. Z. Li, Y. G. Li, Z. J. Li, Z. Y. Li, C. Liang, H. Liang, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, Y. P. Liao, J. Libby, A. Limphirat, C. C. Lin, D. X. Lin, T. Lin, B. J. Liu, B. X. Liu, C. Liu, C. X. Liu, F. Liu, F. H. Liu, Feng Liu, G. M. Liu, H. Liu, H. B. Liu, H. H. Liu, H. M. Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, Ke Liu, L. Liu, L. C. Liu, Lu Liu, M. H. Liu, P. L. Liu, Q. Liu, S. B. Liu, T. Liu, W. K. Liu, W. M. Liu, X. Liu, X. Liu, Y. Liu, Y. Liu, Y. B. Liu, Z. A. Liu, Z. D. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, Z. H. Lu, C. L. Luo, J. R. Luo, M. X. Luo, T. Luo, X. L. Luo, X. R. Lyu, Y. F. Lyu, F. C. Ma, H. Ma, H. L. Ma, J. L. Ma, L. L. Ma, L. R. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, T. Ma, X. T. Ma, X. Y. Ma, Y. M. Ma, F. E. Maas, I. MacKay, M. Maggiora, S. Malde, Y. J. Mao, Z. P. Mao, S. Marcello, Z. 
X. Meng, J. G. Messchendorp, G. Mezzadri, H. Miao, T. J. Min, R. E. Mitchell, X. H. Mo, B. Moses, N. Yu. Muchnoi, J. Muskalla, Y. Nefedov, F. Nerling, L. S. Nie, I. B. Nikolaev, Z. Ning, S. Nisar, Q. L. Niu, W. D. Niu, Y. Niu, S. L. Olsen, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, Y. P. Pei, M. Pelizaeus, H. P. Peng, Y. Y. Peng, K. Peters, J. L. Ping, R. G. Ping, S. Plura, V. Prasad, F. Z. Qi, H. Qi, H. R. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, C. F. Qiao, X. K. Qiao, J. J. Qin, L. Q. Qin, L. Y. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, Z. H. Qu, C. F. Redmer, K. J. Ren, A. Rivetti, M. Rolo, G. Rong, Ch. Rosner, M. Q. Ruan, S. N. Ruan, N. Salone, A. Sarantsev, Y. Schelhaas, K. Schoenning, M. Scodeggio, K. Y. Shan, W. Shan, X. Y. Shan, Z. J. Shang, J. F. Shangguan, L. G. Shao, M. Shao, C. P. Shen, H. F. Shen, W. H. Shen, X. Y. Shen, B. A. Shi, H. Shi, H. C. Shi, J. L. Shi, J. Y. Shi, Q. Q. Shi, S. Y. Shi, X. Shi, J. J. Song, T. Z. Song, W. M. Song, Y. J. Song, Y. X. Song, S. Sosio, S. Spataro, F. Stieler, S. S Su, Y. J. Su, G. B. Sun, G. X. Sun, H. Sun, H. K. Sun, J. F. Sun, K. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, Y. Sun, Y. J. Sun, Y. Z. Sun, Z. Q. Sun, Z. T. Sun, C. J. Tang, G. Y. Tang, J. Tang, M. Tang, Y. A. Tang, L. Y. Tao, Q. T. Tao, M. Tat, J. X. Teng, V. Thoren, W. H. Tian, Y. Tian, Z. F. Tian, I. Uman, Y. Wan, S. J. Wang, B. Wang, B. L. Wang, Bo Wang, D. Y. Wang, F. Wang, H. J. Wang, J. J. Wang, J. P. Wang, K. Wang, L. L. Wang, M. Wang, N. Y. Wang, S. Wang, S. Wang, T. Wang, T. J. Wang, W. Wang, W. Wang, W. P. Wang, X. Wang, X. F. Wang, X. J. Wang, X. L. Wang, X. N. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. L. Wang, Y. N. Wang, Y. Q. Wang, Yaqian Wang, Yi Wang, Z. Wang, Z. L. Wang, Z. Y. Wang, Ziyi Wang, D. H. Wei, F. Weidner, S. P. Wen, Y. R. Wen, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, C. Wu, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, X. H. Wu, Y. Wu, Y. H. Wu, Y. J. Wu, Z. Wu, L. Xia, X. M. Xian, B. H. Xiang, T. Xiang, D. Xiao, G. Y. Xiao, S. Y. Xiao, Y. L. Xiao, Z. J. Xiao, C. Xie, X. H. Xie, Y. Xie, Y. G. Xie, Y. H. Xie, Z. P. Xie, T. Y. Xing, C. F. Xu, C. J. Xu, G. F. Xu, H. Y. Xu, M. Xu, Q. J. Xu, Q. N. Xu, W. Xu, W. L. Xu, X. P. Xu, Y. Xu, Y. C. Xu, Z. S. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, X. Q. Yan, H. J. Yang, H. L. Yang, H. X. Yang, T. Yang, Y. Yang, Y. F. Yang, Y. F. Yang, Y. X. Yang, Z. W. Yang, Z. P. Yao, M. Ye, M. H. Ye, J. H. Yin, Junhao Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, M. C. Yu, T. Yu, X. D. Yu, Y. C. Yu, C. Z. Yuan, J. Yuan, J. Yuan, L. Yuan, S. C. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. A. Zafar, F. R. Zeng, S. H. Zeng, X. Zeng, Y. Zeng, Y. J. Zeng, Y. J. Zeng, X. Y. Zhai, Y. C. Zhai, Y. H. Zhan, A. Q. Zhang, B. L. Zhang, B. X. Zhang, D. H. Zhang, G. Y. Zhang, H. Zhang, H. Zhang, H. C. Zhang, H. H. Zhang, H. H. Zhang, H. Q. Zhang, H. R. Zhang, H. Y. Zhang, J. Zhang, J. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. S. Zhang, J. W. Zhang, J. X. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, L. M. Zhang, Lei Zhang, P. Zhang, Q. Y. Zhang, R. Y. Zhang, S. H. Zhang, Shulei Zhang, X. M. Zhang, X. Y Zhang, X. Y. Zhang, Y. Zhang, Y. Zhang, Y. T. Zhang, Y. H. Zhang, Y. M. Zhang, Yan Zhang, Z. D. Zhang, Z. H. Zhang, Z. L. Zhang, Z. Y. Zhang, Z. Y. Zhang, Z. Z. Zhang, G. Zhao, J. Y. Zhao, J. Z. Zhao, L. Zhao, Lei Zhao, M. G. Zhao, N. Zhao, R. P. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, B. M. Zheng, J. P. Zheng, W. J. Zheng, Y. H. Zheng, B. Zhong, X. Zhong, H. 
Zhou, J. Y. Zhou, L. P. Zhou, S. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, X. Y. Zhou, Y. Z. Zhou, Z. C. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, K. S. Zhu, L. Zhu, L. X. Zhu, S. H. Zhu, T. J. Zhu, W. D. Zhu, Y. C. Zhu, Z. A. Zhu, J. H. Zou, J. Zu
The $e^+e^- \rightarrow D_s^+ D_{s1}(2536)^-$ and $e^+e^- \rightarrow D_s^+ D^*_{s2}(2573)^-$ processes are studied using data samples collected with the BESIII detector at center-of-mass energies from 4.530 to 4.946 GeV. The absolute branching fractions of $D_{s1}(2536)^- \rightarrow \bar{D}^{*0}K^-$ and $D_{s2}^*(2573)^- \rightarrow \bar{D}^0K^-$ are measured for the first time to be $(35.9 \pm 4.8 \pm 3.5)\%$ and $(37.4 \pm 3.1 \pm 4.6)\%$, respectively. The measurements are in tension with predictions based on the assumption that the $D_{s1}(2536)$ and $D_{s2}^*(2573)$ are dominated by a bare $c\bar{s}$ component. The $e^+e^- \rightarrow D_s^+ D_{s1}(2536)^-$ and $e^+e^- \rightarrow D_s^+ D^*_{s2}(2573)^-$ cross sections are measured, and a resonant structure at around 4.6 GeV with a width of 50 MeV is observed for the first time with a statistical significance of $15\sigma$ in the $e^+e^- \rightarrow D_s^+ D^*_{s2}(2573)^-$ process. It could be the $Y(4626)$ found by the Belle collaboration in the $D_s^+ D_{s1}(2536)^-$ final state, since they have similar masses and widths. There is also evidence for a structure at around 4.75 GeV in both processes.
{"title":"Study of the decay and production properties of $D_{s1}(2536)$ and $D_{s2}^*(2573)$","authors":"M. Ablikim, M. N. Achasov, P. Adlarson, O. Afedulidis, X. C. Ai, R. Aliberti, A. Amoroso, Q. An, Y. Bai, O. Bakina, I. Balossino, Y. Ban, H. -R. Bao, V. Batozskaya, K. Begzsuren, N. Berger, M. Berlowski, M. Bertani, D. Bettoni, F. Bianchi, E. Bianco, A. Bortone, I. Boyko, R. A. Briere, A. Brueggemann, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, X. Y. Chai, J. F. Chang, G. R. Che, Y. Z. Che, G. Chelkov, C. Chen, C. H. Chen, Chao Chen, G. Chen, H. S. Chen, H. Y. Chen, M. L. Chen, S. J. Chen, S. L. Chen, S. M. Chen, T. Chen, X. R. Chen, X. T. Chen, Y. B. Chen, Y. Q. Chen, Z. J. Chen, Z. Y. Chen, S. K. Choi, G. Cibinetto, F. Cossio, J. J. Cui, H. L. Dai, J. P. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, C. Q. Deng, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, B. Ding, X. X. Ding, Y. Ding, Y. Ding, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, M. C. Du, S. X. Du, Y. Y. Duan, Z. H. Duan, P. Egorov, Y. H. Fan, J. Fang, J. Fang, S. S. Fang, W. X. Fang, Y. Fang, Y. Q. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, J. H. Feng, Y. T. Feng, M. Fritsch, C. D. Fu, J. L. Fu, Y. W. Fu, H. Gao, X. B. Gao, Y. N. Gao, Yang Gao, S. Garbolino, I. Garzia, L. Ge, P. T. Ge, Z. W. Ge, C. Geng, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, S. Gramigna, M. Greco, M. H. Gu, Y. T. Gu, C. Y. Guan, A. Q. Guo, L. B. Guo, M. J. Guo, R. P. Guo, Y. P. Guo, A. Guskov, J. Gutierrez, K. L. Han, T. T. Han, F. Hanisch, X. Q. Hao, F. A. Harris, K. K. He, K. L. He, F. H. Heinsius, C. H. Heinz, Y. K. Heng, C. Herold, T. Holtmann, P. C. Hong, G. Y. Hou, X. T. Hou, Y. R. Hou, Z. L. Hou, B. Y. Hu, H. M. Hu, J. F. Hu, S. L. Hu, T. Hu, Y. Hu, G. S. Huang, K. X. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Y. S. Huang, T. Hussain, F. Hölzken, N. Hüsken, N. in der Wiesche, J. Jackson, S. Janchiv, J. H. Jeong, Q. Ji, Q. P. Ji, W. Ji, X. B. Ji, X. L. Ji, Y. Y. Ji, X. Q. Jia, Z. K. Jia, D. Jiang, H. B. Jiang, P. C. Jiang, S. S. Jiang, T. J. Jiang, X. S. Jiang, Y. Jiang, J. B. Jiao, J. K. Jiao, Z. Jiao, S. Jin, Y. Jin, M. Q. Jing, X. M. Jing, T. Johansson, S. Kabana, N. Kalantar-Nayestanaki, X. L. Kang, X. S. Kang, M. Kavatsyuk, B. C. Ke, V. Khachatryan, A. Khoukaz, R. Kiuchi, O. B. Kolcu, B. Kopf, M. Kuessner, X. Kui, N. Kumar, A. Kupsc, W. Kühn, J. J. Lane, L. Lavezzi, T. T. Lei, Z. H. Lei, M. Lellmann, T. Lenz, C. Li, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. B. Li, H. J. Li, H. N. Li, Hui Li, J. R. Li, J. S. Li, K. Li, K. L. Li, L. J. Li, L. K. Li, Lei Li, M. H. Li, P. R. Li, Q. M. Li, Q. X. Li, R. Li, S. X. Li, T. Li, W. D. Li, W. G. Li, X. Li, X. H. Li, X. L. Li, X. Y. Li, X. Z. Li, Y. G. Li, Z. J. Li, Z. Y. Li, C. Liang, H. Liang, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, Y. P. Liao, J. Libby, A. Limphirat, C. C. Lin, D. X. Lin, T. Lin, B. J. Liu, B. X. Liu, C. Liu, C. X. Liu, F. Liu, F. H. Liu, Feng Liu, G. M. Liu, H. Liu, H. B. Liu, H. H. Liu, H. M. Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, Ke Liu, L. Liu, L. C. Liu, Lu Liu, M. H. Liu, P. L. Liu, Q. Liu, S. B. Liu, T. Liu, W. K. Liu, W. M. Liu, X. Liu, X. Liu, Y. Liu, Y. Liu, Y. B. Liu, Z. A. Liu, Z. D. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, Z. H. Lu, C. L. Luo, J. R. Luo, M. X. Luo, T. Luo, X. L. Luo, X. R. Lyu, Y. F. Lyu, F. C. Ma, H. Ma, H. L. Ma, J. L. Ma, L. L. Ma, L. R. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, T. Ma, X. T. 
Ma, X. Y. Ma, Y. M. Ma, F. E. Maas, I. MacKay, M. Maggiora, S. Malde, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, H. Miao, T. J. Min, R. E. Mitchell, X. H. Mo, B. Moses, N. Yu. Muchnoi, J. Muskalla, Y. Nefedov, F. Nerling, L. S. Nie, I. B. Nikolaev, Z. Ning, S. Nisar, Q. L. Niu, W. D. Niu, Y. Niu, S. L. Olsen, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, Y. P. Pei, M. Pelizaeus, H. P. Peng, Y. Y. Peng, K. Peters, J. L. Ping, R. G. Ping, S. Plura, V. Prasad, F. Z. Qi, H. Qi, H. R. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, C. F. Qiao, X. K. Qiao, J. J. Qin, L. Q. Qin, L. Y. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, Z. H. Qu, C. F. Redmer, K. J. Ren, A. Rivetti, M. Rolo, G. Rong, Ch. Rosner, M. Q. Ruan, S. N. Ruan, N. Salone, A. Sarantsev, Y. Schelhaas, K. Schoenning, M. Scodeggio, K. Y. Shan, W. Shan, X. Y. Shan, Z. J. Shang, J. F. Shangguan, L. G. Shao, M. Shao, C. P. Shen, H. F. Shen, W. H. Shen, X. Y. Shen, B. A. Shi, H. Shi, H. C. Shi, J. L. Shi, J. Y. Shi, Q. Q. Shi, S. Y. Shi, X. Shi, J. J. Song, T. Z. Song, W. M. Song, Y. J. Song, Y. X. Song, S. Sosio, S. Spataro, F. Stieler, S. S Su, Y. J. Su, G. B. Sun, G. X. Sun, H. Sun, H. K. Sun, J. F. Sun, K. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, Y. Sun, Y. J. Sun, Y. Z. Sun, Z. Q. Sun, Z. T. Sun, C. J. Tang, G. Y. Tang, J. Tang, M. Tang, Y. A. Tang, L. Y. Tao, Q. T. Tao, M. Tat, J. X. Teng, V. Thoren, W. H. Tian, Y. Tian, Z. F. Tian, I. Uman, Y. Wan, S. J. Wang, B. Wang, B. L. Wang, Bo Wang, D. Y. Wang, F. Wang, H. J. Wang, J. J. Wang, J. P. Wang, K. Wang, L. L. Wang, M. Wang, N. Y. Wang, S. Wang, S. Wang, T. Wang, T. J. Wang, W. Wang, W. Wang, W. P. Wang, X. Wang, X. F. Wang, X. J. Wang, X. L. Wang, X. N. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. L. Wang, Y. N. Wang, Y. Q. Wang, Yaqian Wang, Yi Wang, Z. Wang, Z. L. Wang, Z. Y. Wang, Ziyi Wang, D. H. Wei, F. Weidner, S. P. Wen, Y. R. Wen, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, C. Wu, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, X. H. Wu, Y. Wu, Y. H. Wu, Y. J. Wu, Z. Wu, L. Xia, X. M. Xian, B. H. Xiang, T. Xiang, D. Xiao, G. Y. Xiao, S. Y. Xiao, Y. L. Xiao, Z. J. Xiao, C. Xie, X. H. Xie, Y. Xie, Y. G. Xie, Y. H. Xie, Z. P. Xie, T. Y. Xing, C. F. Xu, C. J. Xu, G. F. Xu, H. Y. Xu, M. Xu, Q. J. Xu, Q. N. Xu, W. Xu, W. L. Xu, X. P. Xu, Y. Xu, Y. C. Xu, Z. S. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, X. Q. Yan, H. J. Yang, H. L. Yang, H. X. Yang, T. Yang, Y. Yang, Y. F. Yang, Y. F. Yang, Y. X. Yang, Z. W. Yang, Z. P. Yao, M. Ye, M. H. Ye, J. H. Yin, Junhao Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, M. C. Yu, T. Yu, X. D. Yu, Y. C. Yu, C. Z. Yuan, J. Yuan, J. Yuan, L. Yuan, S. C. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. A. Zafar, F. R. Zeng, S. H. Zeng, X. Zeng, Y. Zeng, Y. J. Zeng, Y. J. Zeng, X. Y. Zhai, Y. C. Zhai, Y. H. Zhan, A. Q. Zhang, B. L. Zhang, B. X. Zhang, D. H. Zhang, G. Y. Zhang, H. Zhang, H. Zhang, H. C. Zhang, H. H. Zhang, H. H. Zhang, H. Q. Zhang, H. R. Zhang, H. Y. Zhang, J. Zhang, J. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. S. Zhang, J. W. Zhang, J. X. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, L. M. Zhang, Lei Zhang, P. Zhang, Q. Y. Zhang, R. Y. Zhang, S. H. Zhang, Shulei Zhang, X. M. Zhang, X. Y Zhang, X. Y. Zhang, Y. Zhang, Y. Zhang, Y. T. Zhang, Y. H. Zhang, Y. M. Zhang, Yan Zhang, Z. D. Zhang, Z. H. Zhang, Z. L. Zhang, Z. Y. Zhang, Z. Y. Zhang, Z. Z. Zhang, G. Zhao, J. Y. Zhao, J. Z. Zhao, L. Zhao, Lei Zhao, M. G. Zhao, N. Zhao, R. P. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. 
Zhao, A. Zhemchugov, B. Zheng, B. M. Zheng, J. P. Zheng, W. J. Zheng, Y. H. Zheng, B. Zhong, X. Zhong, H. Zhou, J. Y. Zhou, L. P. Zhou, S. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, X. Y. Zhou, Y. Z. Zhou, Z. C. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, K. S. Zhu, L. Zhu, L. X. Zhu, S. H. Zhu, T. J. Zhu, W. D. Zhu, Y. C. Zhu, Z. A. Zhu, J. H. Zou, J. Zu","doi":"arxiv-2407.07651","DOIUrl":"https://doi.org/arxiv-2407.07651","url":null,"abstract":"The $e^+e^-rightarrow D_s^+D_{s1}(2536)^-$ and $e^+e^-rightarrow\u0000D_s^+D^*_{s2}(2573)^-$ processes are studied using data samples collected with\u0000the BESIII detector at center-of-mass energies from 4.530 to 4.946~GeV. The\u0000absolute branching fractions of $D_{s1}(2536)^- rightarrow bar{D}^{*0}K^-$\u0000and $D_{s2}^*(2573)^- rightarrow bar{D}^0K^-$ are measured for the first time\u0000to be $(35.9pm 4.8pm 3.5)%$ and $(37.4pm 3.1pm 4.6)%$, respectively. The\u0000measurements are in tension with predictions based on the assumption that the\u0000$D_{s1}(2536)$ and $D_{s2}^*(2573)$ are dominated by a bare $cbar{s}$\u0000component. The $e^+e^-rightarrow D_s^+D_{s1}(2536)^-$ and $e^+e^-rightarrow\u0000D_s^+D^*_{s2}(2573)^-$ cross sections are measured, and a resonant structure at\u0000around 4.6~GeV with a width of 50~MeV is observed for the first time with a\u0000statistical significance of $15sigma$ in the $e^+e^-rightarrow\u0000D_s^+D^*_{s2}(2573)^-$ process. It could be the $Y(4626)$ found by the Belle\u0000collaboration in the $D_s^+D_{s1}(2536)^{-}$ final state, since they have\u0000similar masses and widths. There is also evidence for a structure at around\u00004.75~GeV in both processes.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141587673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
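As a small numerical aside (not taken from the analysis itself), the total uncertainties on the two branching fractions quoted above can be formed by adding the statistical and systematic components in quadrature:

```python
import math

# Combine statistical and systematic uncertainties in quadrature for the
# two branching fractions quoted above (values in percent).
measurements = {
    "Ds1(2536)- -> D*0bar K-": (35.9, 4.8, 3.5),
    "Ds2*(2573)- -> D0bar K-": (37.4, 3.1, 4.6),
}
for name, (value, stat, syst) in measurements.items():
    total = math.hypot(stat, syst)
    print(f"{name}: ({value} +/- {total:.1f})%")
```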
Bo Liang, Minghui Du, He Wang, Yuxiang Xu, Chang Liu, Xiaotong Wei, Peng Xu, Li-e Qiang, Ziren Luo
Detecting the coalescences of massive black hole binaries (MBHBs) is one of the primary targets for space-based gravitational wave observatories such as LISA, Taiji, and TianQin. Fast and accurate parameter estimation of merging MBHBs is of great significance both for astrophysics and for the global fitting of all resolvable sources. However, such analyses entail significant computational costs. To address these challenges, inspired by recent progress in generative models, we propose a novel artificial intelligence (AI) based parameter estimation method called Variance Preserving Flow Matching Posterior Estimation (VPFMPE). Specifically, we utilize triangular interpolation to maintain variance over time, thereby constructing a transport path for training continuous normalizing flows. Compared to the simple linear interpolation used in flow matching to construct the optimal transport path, our approach better captures continuous temporal variations, making it more suitable for the parameter estimation of MBHBs. Additionally, we introduce a parameter transformation based on a symmetry of the detector's response function. This transformation is integrated within VPFMPE, allowing us to train the model on a simplified dataset and then perform parameter estimation on more general data; it is also a crucial factor in improving training speed. For the first time, within a comprehensive and reasonable parameter range, we achieve complete and unbiased 11-dimensional rapid inference for MBHBs in the presence of astrophysical confusion noise using ODE-based generative models. In experiments based on simulated data, our model produces posterior distributions comparable to those obtained by nested sampling.
{"title":"Rapid Parameter Estimation for Merging Massive Black Hole Binaries Using ODE-Based Generative Models","authors":"Bo Liang, Minghui Du, He Wang, Yuxiang Xu, Chang Liu, Xiaotong Wei, Peng Xu, Li-e Qiang, Ziren Luo","doi":"arxiv-2407.07125","DOIUrl":"https://doi.org/arxiv-2407.07125","url":null,"abstract":"Detecting the coalescences of massive black hole binaries (MBHBs) is one of\u0000the primary targets for space-based gravitational wave observatories such as\u0000LISA, Taiji, and Tianqin. The fast and accurate parameter estimation of merging\u0000MBHBs is of great significance for both astrophysics and the global fitting of\u0000all resolvable sources. However, such analyses entail significant computational\u0000costs. To address these challenges, inspired by the latest progress in\u0000generative models, we proposed a novel artificial intelligence (AI) based\u0000parameter estimation method called Variance Preserving Flow Matching Posterior\u0000Estimation (VPFMPE). Specifically, we utilize triangular interpolation to\u0000maintain variance over time, thereby constructing a transport path for training\u0000continuous normalization flows. Compared to the simple linear interpolation\u0000method used in flow matching to construct the optimal transport path, our\u0000approach better captures continuous temporal variations, making it more\u0000suitable for the parameter estimation of MBHBs. Additionally, we creatively\u0000introduce a parameter transformation method based on the symmetry in the\u0000detector's response function. This transformation is integrated within VPFMPE,\u0000allowing us to train the model using a simplified dataset, and then perform\u0000parameter estimation on more general data, hence also acting as a crucial\u0000factor in improving the training speed. In conclusion, for the first time,\u0000within a comprehensive and reasonable parameter range, we have achieved a\u0000complete and unbiased 11-dimensional rapid inference for MBHBs in the presence\u0000of astrophysical confusion noise using ODE-based generative models. In the\u0000experiments based on simulated data, our model produces posterior distributions\u0000comparable to those obtained by nested sampling.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141587675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Laurits Tani, Nalong-Norman Seeba, Hardi Vanaveski, Joosep Pata, Torben Lange
Tau leptons serve as an important tool for studying the production of Higgs and electroweak bosons, both within and beyond the Standard Model of particle physics. Accurate reconstruction and identification of hadronically decaying tau leptons is a crucial task for current and future high energy physics experiments. Given the advances in jet tagging, we demonstrate how tau lepton reconstruction can be decomposed into tau identification, kinematic reconstruction, and decay mode classification in a multi-task machine learning setup. Based on an electron-positron collision dataset with full detector simulation and reconstruction, we show that common jet tagging architectures can be effectively used for these subtasks. We achieve comparable momentum resolutions of 2-3% with all the tested models, while the precision of reconstructing individual decay modes is between 80% and 95%. This paper also serves as an introduction to the new publicly available Fu$\tau$ure dataset and provides recipes for the development and training of tau reconstruction algorithms, while enabling studies of resilience to domain shifts and the use of foundation models for such tasks.
{"title":"A unified machine learning approach for reconstructing hadronically decaying tau leptons","authors":"Laurits Tani, Nalong-Norman Seeba, Hardi Vanaveski, Joosep Pata, Torben Lange","doi":"arxiv-2407.06788","DOIUrl":"https://doi.org/arxiv-2407.06788","url":null,"abstract":"Tau leptons serve as an important tool for studying the production of Higgs\u0000and electroweak bosons, both within and beyond the Standard Model of particle\u0000physics. Accurate reconstruction and identification of hadronically decaying\u0000tau leptons is a crucial task for current and future high energy physics\u0000experiments. Given the advances in jet tagging, we demonstrate how tau lepton\u0000reconstruction can be decomposed into tau identification, kinematic\u0000reconstruction, and decay mode classification in a multi-task machine learning\u0000setup.Based on an electron-positron collision dataset with full detector\u0000simulation and reconstruction, we show that common jet tagging architectures\u0000can be effectively used for these subtasks. We achieve comparable momentum\u0000resolutions of 2-3% with all the tested models, while the precision of\u0000reconstructing individual decay modes is between 80-95%. This paper also serves\u0000as an introduction to a new publicly available Fu{tau}ure dataset and provides\u0000recipes for the development and training of tau reconstruction algorithms,\u0000while allowing to study resilience to domain shifts and the use of foundation\u0000models for such tasks.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141575441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leandro O. Zague, Daniel A. Castello, Carlos F. T. Matt
The novelty of the current work is to propose a statistical procedure that combines estimates of the modal parameters provided by any set of Operational Modal Analysis (OMA) algorithms, so as to avoid preference for a particular one, and to derive an approximate joint probability distribution of the modal parameters from which engineering statistics of interest, such as mean value and variance, are readily obtained. The effectiveness of the proposed strategy is assessed using measured data from an actual centrifugal compressor. The statistics obtained for both forward and backward modal parameters are then compared against modal parameters identified with classical Experimental Modal Analysis (EMA) algorithms during the standard stability verification testing (SVT) performed on centrifugal compressors prior to shipment. The current work demonstrates that combining OMA algorithms can provide accurate estimates of both the modal parameters and the associated uncertainties at low computational cost.
{"title":"Combination of operational modal analysis algorithms to identify modal parameters of an actual centrifugal compressor","authors":"Leandro O. Zague, Daniel A. Castello, Carlos F. T. Matt","doi":"arxiv-2407.07273","DOIUrl":"https://doi.org/arxiv-2407.07273","url":null,"abstract":"The novelty of the current work is precisely to propose a statistical\u0000procedure to combine estimates of the modal parameters provided by any set of\u0000Operational Modal Analysis (OMA) algorithms so as to avoid preference for a\u0000particular one and also to derive an approximate joint probability distribution\u0000of the modal parameters, from which engineering statistics of interest such as\u0000mean value and variance are readily provided. The effectiveness of the proposed\u0000strategy is assessed considering measured data from an actual centrifugal\u0000compressor. The statistics obtained for both forward and backward modal\u0000parameters are finally compared against modal parameters identified during\u0000standard stability verification testing (SVT) of centrifugal compressors prior\u0000to shipment, using classical Experimental Modal Analysis (EMA) algorithms. The\u0000current work demonstrates that combination of OMA algorithms can provide quite\u0000accurate estimates for both the modal parameters and the associated\u0000uncertainties with low computational costs.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141587672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In large lakes, ice cover plays an important role in shipping and navigation, coastal erosion, regional weather and climate, and aquatic ecosystem function. In this study, a novel deep learning model for ice cover concentration prediction in Lake Michigan is introduced. The model uses hindcasted meteorological variables, water depth, and shoreline proximity as inputs, and NOAA ice charts for training, validation, and testing. The proposed framework leverages Convolutional Long Short-Term Memory (ConvLSTM) and Convolutional Neural Network (CNN) layers to capture both spatial and temporal dependencies between model input and output, simulating daily ice cover at 0.1° resolution. Model performance was assessed through lake-wide average metrics and local error metrics, with detailed evaluations conducted at six distinct locations in Lake Michigan. The results demonstrate a high degree of agreement between the model's predictions and ice charts, with an average RMSE of 0.029 for the daily lake-wide average ice concentration. Local daily prediction errors were greater, with an average RMSE of 0.102. Lake-wide and local errors for weekly and monthly averaged ice concentrations were reduced by almost 50% from the daily values. The accuracy of the proposed model surpasses currently available physics-based models in lake-wide ice concentration prediction, offering a promising avenue for enhancing ice prediction and hindcasting in large lakes.
{"title":"A Deep Learning Approach for Modeling and Hindcasting Lake Michigan Ice Cover","authors":"Hazem Abdelhady, Cary Troy","doi":"arxiv-2407.04937","DOIUrl":"https://doi.org/arxiv-2407.04937","url":null,"abstract":"In large lakes, ice cover plays an important role in shipping and navigation,\u0000coastal erosion, regional weather and climate, and aquatic ecosystem function.\u0000In this study, a novel deep learning model for ice cover concentration\u0000prediction in Lake Michigan is introduced. The model uses hindcasted\u0000meteorological variables, water depth, and shoreline proximity as inputs, and\u0000NOAA ice charts for training, validation, and testing. The proposed framework\u0000leverages Convolution Long Short-Term Memory (ConvLSTM) and Convolution Neural\u0000Network (CNN) to capture both spatial and temporal dependencies between model\u0000input and output to simulate daily ice cover at 0.1{deg} resolution. The model\u0000performance was assessed through lake-wide average metrics and local error\u0000metrics, with detailed evaluations conducted at six distinct locations in Lake\u0000Michigan. The results demonstrated a high degree of agreement between the\u0000model's predictions and ice charts, with an average RMSE of 0.029 for the daily\u0000lake-wide average ice concentration. Local daily prediction errors were\u0000greater, with an average RMSE of 0.102. Lake-wide and local errors for weekly\u0000and monthly averaged ice concentrations were reduced by almost 50% from daily\u0000values. The accuracy of the proposed model surpasses currently available\u0000physics-based models in the lake-wide ice concentration prediction, offering a\u0000promising avenue for enhancing ice prediction and hindcasting in large lakes.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141575442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, we investigate the effect of reservoir computing training data on the reconstruction of chaotic dynamics. Our findings indicate that a training time series comprising a few periodic orbits of low periods can successfully reconstruct the Lorenz attractor. We also demonstrate that biased training data does not negatively impact reconstruction success. Our method's ability to reconstruct a physical measure is much better than the so-called cycle expansion approach, which relies on weighted averaging. Additionally, we demonstrate that fixed point attractors and chaotic transients can be accurately reconstructed by a model trained from a few periodic orbits, even when using different parameters.
{"title":"Data-driven modeling from biased small training data using periodic orbits","authors":"Kengo Nakai, Yoshitaka Saiki","doi":"arxiv-2407.06229","DOIUrl":"https://doi.org/arxiv-2407.06229","url":null,"abstract":"In this study, we investigate the effect of reservoir computing training data\u0000on the reconstruction of chaotic dynamics. Our findings indicate that a\u0000training time series comprising a few periodic orbits of low periods can\u0000successfully reconstruct the Lorenz attractor. We also demonstrate that biased\u0000training data does not negatively impact reconstruction success. Our method's\u0000ability to reconstruct a physical measure is much better than the so-called\u0000cycle expansion approach, which relies on weighted averaging. Additionally, we\u0000demonstrate that fixed point attractors and chaotic transients can be\u0000accurately reconstructed by a model trained from a few periodic orbits, even\u0000when using different parameters.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141575440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The true power of computational research typically lies either in what it accomplishes or in what it enables others to accomplish. In this work, both avenues are simultaneously embraced across several distinct efforts existing at three general scales of abstraction of what a material is - atomistic, physical, and design. At each, an efficient materials informatics infrastructure is being built from the ground up based on (1) a fundamental understanding of the underlying prior knowledge, including the data, (2) deployment routes that take advantage of it, and (3) pathways to extend it in an autonomous or semi-autonomous fashion, while relying heavily on artificial intelligence (AI) to guide well-established DFT-based ab initio and CALPHAD-based thermodynamic methods. The resulting multi-level discovery infrastructure is highly generalizable, as it focuses on encoding problems so that they can be solved easily rather than looking for an existing solution. To showcase it, this dissertation discusses the design of multi-alloy functionally graded materials (FGMs) incorporating ultra-high-temperature refractory high entropy alloys (RHEAs), aimed at increasing gas turbine and jet engine efficiency to reduce CO2 emissions, as well as at hypersonic vehicles. It leverages a new graph representation of the underlying mathematical space, using a newly developed algorithm based on combinatorics that is not subject to many of the problems troubling the community. Underneath, property models and phase relations are learned from optimized samplings of ULTERA, the largest and highest-quality HEA dataset in the world. At the atomistic level, MPDD, a data ecosystem optimized for machine learning (ML) and built from over 4.5 million relaxed structures, is used to inform experimental observations and improve thermodynamic models by providing stability data enabled by a new efficient featurization framework.
{"title":"Efficient Materials Informatics between Rockets and Electrons","authors":"Adam M. Krajewski","doi":"arxiv-2407.04648","DOIUrl":"https://doi.org/arxiv-2407.04648","url":null,"abstract":"The true power of computational research typically can lay in either what it\u0000accomplishes or what it enables others to accomplish. In this work, both\u0000avenues are simultaneously embraced across several distinct efforts existing at\u0000three general scales of abstractions of what a material is - atomistic,\u0000physical, and design. At each, an efficient materials informatics\u0000infrastructure is being built from the ground up based on (1) the fundamental\u0000understanding of the underlying prior knowledge, including the data, (2)\u0000deployment routes that take advantage of it, and (3) pathways to extend it in\u0000an autonomous or semi-autonomous fashion, while heavily relying on artificial\u0000intelligence (AI) to guide well-established DFT-based ab initio and\u0000CALPHAD-based thermodynamic methods. The resulting multi-level discovery infrastructure is highly generalizable as\u0000it focuses on encoding problems to solve them easily rather than looking for an\u0000existing solution. To showcase it, this dissertation discusses the design of\u0000multi-alloy functionally graded materials (FGMs) incorporating ultra-high\u0000temperature refractory high entropy alloys (RHEAs) towards gas turbine and jet\u0000engine efficiency increase reducing CO2 emissions, as well as hypersonic\u0000vehicles. It leverages a new graph representation of underlying mathematical\u0000space using a newly developed algorithm based on combinatorics, not subject to\u0000many problems troubling the community. Underneath, property models and phase\u0000relations are learned from optimized samplings of the largest and highest\u0000quality dataset of HEA in the world, called ULTERA. At the atomistic level, a\u0000data ecosystem optimized for machine learning (ML) from over 4.5 million\u0000relaxed structures, called MPDD, is used to inform experimental observations\u0000and improve thermodynamic models by providing stability data enabled by a new\u0000efficient featurization framework.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141575445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Ramirez-Morales, A. Gutiérrez-Rodríguez, T. Cisneros-Pérez, H. Garcia-Tecocoatzi, A. Dávila-Rivera
In this article, we explore machine learning techniques using support vector machines with two novel approaches: exotic and physics-informed support vector machines. Exotic support vector machines employ unconventional techniques such as genetic algorithms and boosting. Physics-informed support vector machines integrate the physics dynamics of a given high-energy physics process in a straightforward manner. The goal is to efficiently distinguish signal and background events in high-energy physics collision data. To test our algorithms, we perform computational experiments with simulated Drell-Yan events in proton-proton collisions. Our results highlight the superiority of the physics-informed support vector machines, emphasizing their potential in high-energy physics and promoting the inclusion of physics information in machine learning algorithms for future research.
{"title":"Exotic and physics-informed support vector machines for high energy physics","authors":"A. Ramirez-Morales, A. Gutiérrez-Rodríguez, T. Cisneros-Pérez, H. Garcia-Tecocoatzi, A. Dávila-Rivera","doi":"arxiv-2407.03538","DOIUrl":"https://doi.org/arxiv-2407.03538","url":null,"abstract":"In this article, we explore machine learning techniques using support vector\u0000machines with two novel approaches: exotic and physics-informed support vector\u0000machines. Exotic support vector machines employ unconventional techniques such\u0000as genetic algorithms and boosting. Physics-informed support vector machines\u0000integrate the physics dynamics of a given high-energy physics process in a\u0000straightforward manner. The goal is to efficiently distinguish signal and\u0000background events in high-energy physics collision data. To test our\u0000algorithms, we perform computational experiments with simulated Drell-Yan\u0000events in proton-proton collisions. Our results highlight the superiority of\u0000the physics-informed support vector machines, emphasizing their potential in\u0000high-energy physics and promoting the inclusion of physics information in\u0000machine learning algorithms for future research.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141575443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finding appropriate reaction conditions that give high product yields in chemical synthesis is crucial for the chemical and pharmaceutical industries. However, due to the vast chemical space, conducting experiments for each possible reaction condition is impractical. Consequently, models such as QSAR (Quantitative Structure-Activity Relationship) or ML (Machine Learning) models have been developed to predict the outcomes of reactions and illustrate how reaction conditions affect product yield. Despite these advancements, inferring all possible combinations remains computationally prohibitive when using a conventional CPU. In this work, we explore using a Digital Annealing Unit (DAU) to tackle these large-scale optimization problems more efficiently by solving Quadratic Unconstrained Binary Optimization (QUBO) problems. Two types of QUBO models are constructed in this work: one using quantum annealing and the other using ML. Both models are built and tested on four high-throughput experimentation (HTE) datasets and selected Reaxys datasets. Our results suggest that the performance of the models is comparable to classical ML methods (i.e., Random Forest and Multilayer Perceptron (MLP)), while inference with our models requires only seconds on a DAU. Additionally, in campaigns involving active learning and autonomous design of reaction conditions to achieve higher reaction yields, our model demonstrates significant improvements as new data are added, showing promise for adopting our method in the iterative nature of such problem settings. Our method can also accelerate the screening of billions of reaction conditions, achieving speeds millions of times faster than traditional computing units in identifying superior conditions. Therefore, leveraging the DAU with our developed QUBO models has the potential to be a valuable tool for innovative chemical synthesis.
{"title":"Application of the Digital Annealer Unit in Optimizing Chemical Reaction Conditions for Enhanced Production Yields","authors":"Shih-Cheng Li, Pei-Hwa Wang, Jheng-Wei Su, Wei-Yin Chiang, Shih-Hsien Huang, Yen-Chu Lin, Chia-Ho Ou, Chih-Yu Chen","doi":"arxiv-2407.17485","DOIUrl":"https://doi.org/arxiv-2407.17485","url":null,"abstract":"Finding appropriate reaction conditions that yield high product rates in\u0000chemical synthesis is crucial for the chemical and pharmaceutical industries.\u0000However, due to the vast chemical space, conducting experiments for each\u0000possible reaction condition is impractical. Consequently, models such as QSAR\u0000(Quantitative Structure-Activity Relationship) or ML (Machine Learning) have\u0000been developed to predict the outcomes of reactions and illustrate how reaction\u0000conditions affect product yield. Despite these advancements, inferring all\u0000possible combinations remains computationally prohibitive when using a\u0000conventional CPU. In this work, we explore using a Digital Annealing Unit (DAU)\u0000to tackle these large-scale optimization problems more efficiently by solving\u0000Quadratic Unconstrained Binary Optimization (QUBO). Two types of QUBO models\u0000are constructed in this work: one using quantum annealing and the other using\u0000ML. Both models are built and tested on four high-throughput experimentation\u0000(HTE) datasets and selected Reaxys datasets. Our results suggest that the\u0000performance of models is comparable to classical ML methods (i.e., Random\u0000Forest and Multilayer Perceptron (MLP)), while the inference time of our models\u0000requires only seconds with a DAU. Additionally, in campaigns involving active\u0000learning and autonomous design of reaction conditions to achieve higher\u0000reaction yield, our model demonstrates significant improvements by adding new\u0000data, showing promise of adopting our method in the iterative nature of such\u0000problem settings. Our method can also accelerate the screening of billions of\u0000reaction conditions, achieving speeds millions of times faster than traditional\u0000computing units in identifying superior conditions. Therefore, leveraging the\u0000DAU with our developed QUBO models has the potential to be a valuable tool for\u0000innovative chemical synthesis.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"140 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141784372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}