Data-driven sparse modeling of oscillations in plasma space propulsion
Pub Date : 2024-08-23 | DOI: 10.1088/2632-2153/ad6d29
Borja Bayón-Buján, Mario Merino
An algorithm to obtain data-driven models of oscillatory phenomena in plasma space propulsion systems is presented, based on sparse regression (SINDy) and Pareto front analysis. The algorithm can incorporate physical constraints, use data bootstrapping for additional robustness, and be fine-tuned to different metrics. Standard, weak, and integral SINDy formulations are discussed and compared. The scheme is benchmarked on breathing-mode oscillations in Hall effect thrusters, using particle-in-cell/fluid simulation data. Models of varying complexity are obtained for the average plasma properties and are shown to have a clear physical interpretability and agreement with existing 0D models in the literature. Lastly, the algorithm is also shown to enable the identification of physical subdomains with qualitatively different plasma dynamics, providing valuable information for more advanced modeling approaches.
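To make the regression at the heart of this approach concrete, the sketch below hand-rolls the sequentially thresholded least-squares (STLSQ) step of SINDy on a toy two-variable oscillator; the toy system, candidate library, and threshold are illustrative assumptions and do not reproduce the paper's breathing-mode data, weak/integral formulations, bootstrapping, or Pareto-front analysis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-variable oscillator (Lotka-Volterra-like), standing in for 0D breathing-mode data.
def rhs(t, y, a=1.0, b=0.5, c=1.0, d=0.5):
    x1, x2 = y
    return [a * x1 - b * x1 * x2, -c * x2 + d * x1 * x2]

t = np.linspace(0.0, 20.0, 2000)
sol = solve_ivp(rhs, (t[0], t[-1]), [2.0, 1.0], t_eval=t)
X = sol.y.T                                   # samples x states
dXdt = np.gradient(X, t, axis=0)              # finite-difference time derivatives

# Candidate library Theta(X): polynomials up to degree 2 in (x1, x2).
def library(X):
    x1, x2 = X[:, 0], X[:, 1]
    names = ["1", "x1", "x2", "x1^2", "x1*x2", "x2^2"]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2]), names

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: the sparse regression inside SINDy."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):        # refit only the surviving terms
            keep = ~small[:, k]
            if keep.any():
                Xi[keep, k] = np.linalg.lstsq(Theta[:, keep], dXdt[:, k], rcond=None)[0]
    return Xi

Theta, names = library(X)
Xi = stlsq(Theta, dXdt)
for k in range(X.shape[1]):
    terms = [f"{Xi[j, k]:+.2f} {names[j]}" for j in range(len(names)) if Xi[j, k] != 0.0]
    print(f"d x{k + 1}/dt =", " ".join(terms))
```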
{"title":"Data-driven sparse modeling of oscillations in plasma space propulsion","authors":"Borja Bayón-Buján, Mario Merino","doi":"10.1088/2632-2153/ad6d29","DOIUrl":"https://doi.org/10.1088/2632-2153/ad6d29","url":null,"abstract":"An algorithm to obtain data-driven models of oscillatory phenomena in plasma space propulsion systems is presented, based on sparse regression (SINDy) and Pareto front analysis. The algorithm can incorporate physical constraints, use data bootstrapping for additional robustness, and fine-tuning to different metrics. Standard, weak and integral SINDy formulations are discussed and compared. The scheme is benchmarked for the case of breathing-mode oscillations in Hall effect thrusters, using particle-in-cell/fluid simulation data. Models of varying complexity are obtained for the average plasma properties, and shown to have a clear physical interpretability and agreement with existing 0D models in the literature. Lastly, the algorithm applied is also shown to enable the identification of physical subdomains with qualitatively different plasma dynamics, providing valuable information for more advanced modeling approaches.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"11 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142225270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Active causal learning for decoding chemical complexities with targeted interventions
Pub Date : 2024-08-23 | DOI: 10.1088/2632-2153/ad6feb
Zachary R Fox, Ayana Ghosh
Predicting and enhancing inherent properties based on molecular structures is paramount to design tasks in medicine, materials science, and environmental management. Current machine learning and deep learning approaches have become standard for such predictions, but they face challenges when applied across different datasets because they rely on correlations between molecular representations and target properties. These approaches typically depend on large datasets to capture the diversity within the chemical space, facilitating a more accurate approximation, interpolation, or extrapolation of the chemical behavior of molecules. In our research, we introduce an active learning approach that discerns underlying cause-effect relationships through strategic sampling with the use of a graph loss function. This method identifies the smallest subset of the dataset capable of encoding the most information representative of a much larger chemical space. The identified causal relations are then leveraged to conduct systematic interventions, optimizing the design task within a chemical space that the models have not encountered previously. While our implementation focused on the QM9 quantum-chemical dataset for a specific design task (finding molecules with a large dipole moment), our active causal learning approach, driven by intelligent sampling and interventions, holds potential for broader applications in molecular and materials design and discovery.
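For orientation only, the following is a generic pool-based active-learning loop (uncertainty sampling with a Gaussian-process surrogate on a toy 1D property); it does not implement the paper's graph loss function or causal interventions, and every name and parameter here is an illustrative assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy stand-in for a molecular property over a 1D descriptor.
X_pool = rng.uniform(-3.0, 3.0, size=(500, 1))
y_pool = np.sin(2.0 * X_pool[:, 0]) + 0.1 * rng.normal(size=500)

labeled = list(rng.choice(len(X_pool), size=5, replace=False))   # small seed set

for step in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
    gp.fit(X_pool[labeled], y_pool[labeled])
    _, std = gp.predict(X_pool, return_std=True)
    std[labeled] = -np.inf                    # never re-query already-labeled points
    labeled.append(int(np.argmax(std)))       # query the most uncertain candidate

print(f"labeled {len(labeled)} of {len(X_pool)} pool points")
```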
{"title":"Active causal learning for decoding chemical complexities with targeted interventions","authors":"Zachary R Fox, Ayana Ghosh","doi":"10.1088/2632-2153/ad6feb","DOIUrl":"https://doi.org/10.1088/2632-2153/ad6feb","url":null,"abstract":"Predicting and enhancing inherent properties based on molecular structures is paramount to design tasks in medicine, materials science, and environmental management. Most of the current machine learning and deep learning approaches have become standard for predictions, but they face challenges when applied across different datasets due to reliance on correlations between molecular representation and target properties. These approaches typically depend on large datasets to capture the diversity within the chemical space, facilitating a more accurate approximation, interpolation, or extrapolation of the chemical behavior of molecules. In our research, we introduce an active learning approach that discerns underlying cause-effect relationships through strategic sampling with the use of a graph loss function. This method identifies the smallest subset of the dataset capable of encoding the most information representative of a much larger chemical space. The identified causal relations are then leveraged to conduct systematic interventions, optimizing the design task within a chemical space that the models have not encountered previously. While our implementation focused on the QM9 quantum-chemical dataset for a specific design task—finding molecules with a large dipole moment—our active causal learning approach, driven by intelligent sampling and interventions, holds potential for broader applications in molecular, materials design and discovery.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"68 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142197696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emergence of chemotactic strategies with multi-agent reinforcement learning
Pub Date : 2024-08-21 | DOI: 10.1088/2632-2153/ad5f73
Samuel Tovey, Christoph Lohrmann, Christian Holm
Reinforcement learning (RL) is a flexible and efficient method for programming micro-robots in complex environments. Here we investigate whether RL can provide insights into biological systems when trained to perform chemotaxis, that is, whether we can learn how intelligent agents process the available information in order to swim towards a target. We run simulations covering a range of agent shapes, sizes, and swim speeds to determine whether the physical constraints on biological swimmers, namely Brownian motion, lead to regions where training of reinforcement learners fails. We find that the RL agents can perform chemotaxis as soon as it is physically possible and, in some cases, even before the active swimming overpowers the stochastic environment. We study the efficiency of the emergent policy and identify convergence in agent size and swim speeds. Finally, we study the strategy adopted by the RL algorithm to explain how the agents perform their tasks. To this end, we identify three dominant emerging strategies and several rarer approaches. These strategies, whilst producing almost identical trajectories in simulation, are distinct and give insight into the possible mechanisms by which biological agents explore their environment and respond to changing conditions.
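As a rough illustration of the learning problem, the toy below trains a tabular Q-learning agent to climb a noisy 1D concentration gradient under a Brownian kick; the paper's agents are continuous multi-agent swimmers with shape and speed parameters, none of which is modeled here, and all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D world with the chemoattractant peak at x = 0; higher concentration is better.
def concentration(x):
    return -abs(x)

# State: did the concentration drop, stay flat, or rise since the last step (0/1/2)?
# Actions: swim left (0) or right (1); every move also receives a Brownian kick.
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps, noise = 0.1, 0.9, 0.1, 0.5

for episode in range(2000):
    x = rng.uniform(5.0, 10.0)
    prev_c = concentration(x)
    state = 1
    for step in range(200):
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
        x += (2 * a - 1) * 0.5 + noise * rng.normal()     # swim + Brownian motion
        c = concentration(x)
        new_state = int(np.sign(c - prev_c)) + 1
        reward = c - prev_c                               # reward moving up-gradient
        Q[state, a] += alpha * (reward + gamma * Q[new_state].max() - Q[state, a])
        state, prev_c = new_state, c

print("greedy action per state (0 = left, 1 = right):", np.argmax(Q, axis=1))
```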
{"title":"Emergence of chemotactic strategies with multi-agent reinforcement learning","authors":"Samuel Tovey, Christoph Lohrmann, Christian Holm","doi":"10.1088/2632-2153/ad5f73","DOIUrl":"https://doi.org/10.1088/2632-2153/ad5f73","url":null,"abstract":"Reinforcement learning (RL) is a flexible and efficient method for programming micro-robots in complex environments. Here we investigate whether RL can provide insights into biological systems when trained to perform chemotaxis. Namely, whether we can learn about how intelligent agents process given information in order to swim towards a target. We run simulations covering a range of agent shapes, sizes, and swim speeds to determine if the physical constraints on biological swimmers, namely Brownian motion, lead to regions where reinforcement learners’ training fails. We find that the RL agents can perform chemotaxis as soon as it is physically possible and, in some cases, even before the active swimming overpowers the stochastic environment. We study the efficiency of the emergent policy and identify convergence in agent size and swim speeds. Finally, we study the strategy adopted by the RL algorithm to explain how the agents perform their tasks. To this end, we identify three emerging dominant strategies and several rare approaches taken. These strategies, whilst producing almost identical trajectories in simulation, are distinct and give insight into the possible mechanisms behind which biological agents explore their environment and respond to changing conditions.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"97 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142197697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantum support vector data description for anomaly detection
Pub Date : 2024-08-21 | DOI: 10.1088/2632-2153/ad6be8
Hyeondo Oh, Daniel K Park
Anomaly detection is a critical problem in data analysis and pattern recognition, finding applications in various domains. We introduce quantum support vector data description (QSVDD), an unsupervised learning algorithm designed for anomaly detection. QSVDD utilizes a shallow-depth quantum circuit to learn a minimum-volume hypersphere that tightly encloses normal data, tailored for the constraints of noisy intermediate-scale quantum (NISQ) computing. Simulation results on the MNIST and Fashion MNIST image datasets, as well as credit card fraud detection, demonstrate that QSVDD outperforms both quantum autoencoder and deep learning-based approaches under similar training conditions. Notably, QSVDD requires an extremely small number of model parameters, which increases logarithmically with the number of input qubits. This enables efficient learning with a simple training landscape, presenting a compact quantum machine learning model with strong performance for anomaly detection.
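QSVDD builds on the classical support vector data description idea of enclosing normal data in a minimum-volume hypersphere. Purely as a classical point of reference, the sketch below scores anomalies by their distance to the data center in a fixed random feature map standing in for the quantum feature map; the variational circuit, data encoding, and training of the paper are not represented, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Normal" training data: one Gaussian blob; test set: blob samples plus outliers.
X_train = rng.normal(0.0, 1.0, size=(500, 2))
X_test = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
                    rng.normal(5.0, 1.0, size=(5, 2))])    # injected anomalies

# Fixed random Fourier feature map, standing in for a (quantum) feature map.
W = rng.normal(size=(2, 64))
b = rng.uniform(0.0, 2.0 * np.pi, size=64)

def phi(X):
    """Random Fourier features (an RBF-kernel approximation)."""
    return np.sqrt(2.0 / 64) * np.cos(X @ W + b)

# Hypersphere description of the normal class: a center in feature space and a
# radius chosen so that roughly 95% of the training points fall inside it.
Z = phi(X_train)
center = Z.mean(axis=0)
radius = np.quantile(np.linalg.norm(Z - center, axis=1), 0.95)

scores = np.linalg.norm(phi(X_test) - center, axis=1)       # anomaly score
print("test indices flagged as anomalous:", np.where(scores > radius)[0])
```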
{"title":"Quantum support vector data description for anomaly detection","authors":"Hyeondo Oh, Daniel K Park","doi":"10.1088/2632-2153/ad6be8","DOIUrl":"https://doi.org/10.1088/2632-2153/ad6be8","url":null,"abstract":"Anomaly detection is a critical problem in data analysis and pattern recognition, finding applications in various domains. We introduce quantum support vector data description (QSVDD), an unsupervised learning algorithm designed for anomaly detection. QSVDD utilizes a shallow-depth quantum circuit to learn a minimum-volume hypersphere that tightly encloses normal data, tailored for the constraints of noisy intermediate-scale quantum (NISQ) computing. Simulation results on the MNIST and Fashion MNIST image datasets, as well as credit card fraud detection, demonstrate that QSVDD outperforms both quantum autoencoder and deep learning-based approaches under similar training conditions. Notably, QSVDD requires an extremely small number of model parameters, which increases logarithmically with the number of input qubits. This enables efficient learning with a simple training landscape, presenting a compact quantum machine learning model with strong performance for anomaly detection.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"65 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142197698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Normalizing flows as an enhanced sampling method for atomistic supercooled liquids
Pub Date : 2024-08-21 | DOI: 10.1088/2632-2153/ad6ca0
Gerhard Jung, Giulio Biroli, Ludovic Berthier
Normalizing flows can transform a simple prior probability distribution into a more complex target distribution. Here, we evaluate the ability and efficiency of generative machine learning methods to sample the Boltzmann distribution of an atomistic model for glass-forming liquids. This is a notoriously difficult task, as it amounts to ergodically exploring the complex free energy landscape of a disordered and frustrated many-body system. We optimize a normalizing flow model to successfully transform high-temperature configurations of a dense liquid into low-temperature ones, near the glass transition. We perform a detailed comparative analysis with established enhanced sampling techniques developed in the physics literature to assess and rank the performance of normalizing flows against state-of-the-art algorithms. We demonstrate that machine learning methods are very promising, showing a large speedup over conventional molecular dynamics. Normalizing flows show performance comparable to parallel tempering and population annealing, while still falling far behind the swap Monte Carlo algorithm. Our study highlights the potential of generative machine learning models in scientific computing for complex systems, but also points to some of their current limitations and the need for further improvement.
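A minimal example of the underlying machinery, under stated assumptions: a RealNVP-style affine coupling flow trained by maximum likelihood to map a standard Gaussian onto a toy 2D bimodal density. The paper's flows operate on atomistic configurations and are combined with reweighting and enhanced-sampling machinery that is not shown here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 2D bimodal "target", standing in for a Boltzmann distribution.
def sample_target(n):
    centers = torch.randint(0, 2, (n, 1)).float() * 4.0 - 2.0    # -2 or +2
    return torch.cat([torch.randn(n, 1) * 0.5 + centers, torch.randn(n, 1) * 0.5], dim=1)

class AffineCoupling(nn.Module):
    """RealNVP-style layer: rescale/shift one coordinate conditioned on the other."""
    def __init__(self, flip):
        super().__init__()
        self.flip = flip
        self.net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))

    def forward(self, x):                     # maps x -> z, returns log|det J|
        x_id, x_tr = (x[:, 1:], x[:, :1]) if self.flip else (x[:, :1], x[:, 1:])
        s, t = self.net(x_id).chunk(2, dim=1)
        s = torch.tanh(s)                     # keep the scale factors bounded
        z_tr = x_tr * torch.exp(s) + t
        z = torch.cat([z_tr, x_id] if self.flip else [x_id, z_tr], dim=1)
        return z, s.sum(dim=1)

flow = nn.ModuleList([AffineCoupling(flip=bool(i % 2)) for i in range(4)])
base = torch.distributions.Normal(0.0, 1.0)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

for step in range(2000):                      # maximum-likelihood training
    x = sample_target(256)
    log_det = torch.zeros(x.shape[0])
    for layer in flow:
        x, ld = layer(x)
        log_det = log_det + ld
    loss = -(base.log_prob(x).sum(dim=1) + log_det).mean()   # negative log-likelihood
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final negative log-likelihood:", float(loss))
```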
{"title":"Normalizing flows as an enhanced sampling method for atomistic supercooled liquids","authors":"Gerhard Jung, Giulio Biroli, Ludovic Berthier","doi":"10.1088/2632-2153/ad6ca0","DOIUrl":"https://doi.org/10.1088/2632-2153/ad6ca0","url":null,"abstract":"Normalizing flows can transform a simple prior probability distribution into a more complex target distribution. Here, we evaluate the ability and efficiency of generative machine learning methods to sample the Boltzmann distribution of an atomistic model for glass-forming liquids. This is a notoriously difficult task, as it amounts to ergodically exploring the complex free energy landscape of a disordered and frustrated many-body system. We optimize a normalizing flow model to successfully transform high-temperature configurations of a dense liquid into low-temperature ones, near the glass transition. We perform a detailed comparative analysis with established enhanced sampling techniques developed in the physics literature to assess and rank the performance of normalizing flows against state-of-the-art algorithms. We demonstrate that machine learning methods are very promising, showing a large speedup over conventional molecular dynamics. Normalizing flows show performances comparable to parallel tempering and population annealing, while still falling far behind the swap Monte Carlo algorithm. Our study highlights the potential of generative machine learning models in scientific computing for complex systems, but also points to some of its current limitations and the need for further improvement.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"12 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142197699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coincidence anomaly detection for unsupervised locating of edge localized modes in the DIII-D tokamak dataset
Pub Date : 2024-08-20 | DOI: 10.1088/2632-2153/ad6be7
Finn H O’Shea, Semin Joung, David R Smith, Daniel Ratner, Ryan Coffee
Using supervised learning to train a machine learning model to predict an oncoming edge localized mode (ELM) requires a large number of labeled samples. Creating an appropriate data set from the very large database of discharges at a long-running tokamak, such as DIII-D, would be a very time-consuming process for a human. Considering this need and difficulty, we use coincidence anomaly detection, an unsupervised learning technique, to train an ELM identifier that locates and labels ELMs in the DIII-D discharge database. This ELM identifier simultaneously achieves a precision of 0.68 and a recall of 0.63 (AUC of 0.73) when identifying ELMs in example time series drawn from thousands of discharges spanning five years. In a test set of 50 discharges, the algorithm finds over 26 thousand ELM candidates, more than 5 times the existing catalog of ELMs labeled by humans.
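The coincidence idea can be sketched with two synthetic channels: each channel gets its own simple anomaly detector, and only detections that occur in both channels within a small time window are kept. The detector, thresholds, and signals below are illustrative assumptions, not the diagnostics or method used on DIII-D data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic diagnostic channels sharing burst events, plus channel-specific spikes.
n = 5000
events = rng.choice(np.arange(100, n - 100), size=20, replace=False)
sig_a = rng.normal(0.0, 1.0, n)
sig_b = rng.normal(0.0, 1.0, n)
for e in events:                              # shared bursts appear in both channels
    sig_a[e:e + 5] += 8.0
    sig_b[e:e + 5] += 8.0
sig_a[rng.choice(n, 30)] += 8.0               # nuisance spikes in channel A only
sig_b[rng.choice(n, 30)] += 8.0               # nuisance spikes in channel B only

def detect(x, thresh=5.0):
    """Single-channel detector: robust z-score against the median and MAD."""
    med = np.median(x)
    mad = np.median(np.abs(x - med)) + 1e-9
    return np.where(0.6745 * (x - med) / mad > thresh)[0]

cand_a, cand_b = detect(sig_a), detect(sig_b)

# Coincidence step: keep channel-A detections with a channel-B partner within 2 samples.
window = 2
coincident = [i for i in cand_a if np.any(np.abs(cand_b - i) <= window)]
print(f"channel-A detections: {cand_a.size}, coincident across channels: {len(coincident)}")
```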
{"title":"Coincidence anomaly detection for unsupervised locating of edge localized modes in the DIII-D tokamak dataset","authors":"Finn H O’Shea, Semin Joung, David R Smith, Daniel Ratner, Ryan Coffee","doi":"10.1088/2632-2153/ad6be7","DOIUrl":"https://doi.org/10.1088/2632-2153/ad6be7","url":null,"abstract":"Using supervised learning to train a machine learning model to predict an on-coming edge localized mode (ELM) requires a large number of labeled samples. Creating an appropriate data set from the very large database of discharges at a long-running tokamak, such as DIII-D, would be a very time-consuming process for a human. Considering this need and difficulty, we use coincidence anomaly detection, an unsupervised learning technique, to train an ELM-identifier to identify and label ELMs in the DIII-D discharge database. This ELM-identifier shows, simultaneously, a precision of 0.68 and a recall of 0.63 (AUC is 0.73) on identifying ELMs in example time series pulled from thousands of discharges spanning five years. In a test set of 50 discharges, the algorithm finds over 26 thousand ELM candidates, more than 5 times the existing catalog of ELMs labeled by humans.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"27 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142197701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectral-bias and kernel-task alignment in physically informed neural networks
Pub Date : 2024-08-20 | DOI: 10.1088/2632-2153/ad652d
Inbar Seroussi, Asaf Miron, Zohar Ringel
Physically informed neural networks (PINNs) are a promising emerging method for solving differential equations. As in many other deep learning approaches, the choice of PINN design and training protocol requires careful craftsmanship. Here, we suggest a comprehensive theoretical framework that sheds light on this important problem. Leveraging an equivalence between infinitely over-parameterized neural networks and Gaussian process regression, we derive an integro-differential equation that governs PINN prediction in the large data-set limit—the neurally-informed equation. This equation augments the original one by a kernel term reflecting architecture choices. It allows quantifying implicit bias induced by the network via a spectral decomposition of the source term in the original differential equation.
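For readers less familiar with the setup being analyzed, here is a bare-bones PINN that fits u'(x) = -u(x) with u(0) = 1 on [0, 2] using a collocation-residual loss; it is a generic textbook PINN in PyTorch, not the Gaussian-process framework or the neurally-informed equation derived in the paper, and every choice below is an illustrative assumption.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# u_theta(x): a small fully connected network approximating the solution.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0.0, 2.0, 64).reshape(-1, 1).requires_grad_(True)   # collocation points
x0 = torch.zeros(1, 1)                                                  # boundary point

for step in range(3000):
    u = net(x)
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    residual = du + u                                 # enforce u'(x) = -u(x)
    loss = (residual ** 2).mean() + (net(x0) - 1.0).pow(2).mean()   # ODE + boundary terms
    opt.zero_grad()
    loss.backward()
    opt.step()

# The exact solution is u(x) = exp(-x); u(1) should be close to 0.3679.
print("u(1) =", float(net(torch.tensor([[1.0]]))))
```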
{"title":"Spectral-bias and kernel-task alignment in physically informed neural networks","authors":"Inbar Seroussi, Asaf Miron, Zohar Ringel","doi":"10.1088/2632-2153/ad652d","DOIUrl":"https://doi.org/10.1088/2632-2153/ad652d","url":null,"abstract":"Physically informed neural networks (PINNs) are a promising emerging method for solving differential equations. As in many other deep learning approaches, the choice of PINN design and training protocol requires careful craftsmanship. Here, we suggest a comprehensive theoretical framework that sheds light on this important problem. Leveraging an equivalence between infinitely over-parameterized neural networks and Gaussian process regression, we derive an integro-differential equation that governs PINN prediction in the large data-set limit—the neurally-informed equation. This equation augments the original one by a kernel term reflecting architecture choices. It allows quantifying implicit bias induced by the network via a spectral decomposition of the source term in the original differential equation.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"398 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142197700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-20 | DOI: 10.1088/2632-2153/ad6be6
Suresh Bishnoi, Ravinder Bhattoo, Jayadeva, Sayan Ranu, N M Anoop Krishnan
The time evolution of physical systems is described by differential equations, which depend on abstract quantities like energy and force. Traditionally, these quantities are derived as functionals based on observables such as positions and velocities. Discovering these governing symbolic laws is the key to comprehending the interactions in nature. Here, we present a Hamiltonian graph neural network (HGNN), a physics-enforced GNN that learns the dynamics of systems directly from their trajectory. We demonstrate the performance of HGNN on