Sebastian Rodriguez, Marc Rébillat, Shweta Paunikar, Pierre Margerit, Eric Monteiro, Francisco Chinesta, Nazih Mechbal
Structural Health Monitoring (SHM) aims to monitor in real time the health state of engineering structures. For thin structures, Lamb Waves (LW) are very efficient for SHM purposes. A bonded piezoelectric transducer (PZT) emits LW into the structure in the form of a short tone burst. This initial wave packet (IWP) propagates in the structure and interacts with its boundaries and discontinuities, and with any damage present, generating additional wave packets. The main issues with LW-based SHM are that at least two LW modes are simultaneously excited and that those modes are dispersive. The Matching Pursuit Method (MPM), which approximates a signal as a sum of delayed and scaled atoms taken from an a priori known learning dictionary, seems very appealing in such a context; however, it is limited to nondispersive signals and relies on an a priori known dictionary. An improved version of MPM, called the Single Atom Convolutional Matching Pursuit method (SACMPM), is proposed here: it addresses the dispersion phenomena by decomposing a measured signal into delayed and dispersed atoms, and it limits the learning dictionary to a single atom. Its performance is illustrated on numerical and experimental signals, as well as its use for damage detection. Although the signal approximation method proposed in this paper finds an original application in the context of SHM, the method remains completely general and can easily be applied to any signal processing problem.
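The greedy decomposition at the heart of MPM can be sketched in a few lines. The snippet below is a minimal single-atom, dispersion-free illustration (plain matching pursuit, not SACMPM's dispersion handling): it repeatedly cross-correlates the residual with one known atom, picks the best delay and least-squares amplitude, and subtracts that wave packet.

```python
import numpy as np

def matching_pursuit(signal, atom, n_iter=3):
    """Greedily approximate `signal` as a sum of delayed, scaled copies
    of a single known `atom` (nondispersive special case)."""
    residual = signal.astype(float).copy()
    atom_energy = float(np.dot(atom, atom))
    components = []
    for _ in range(n_iter):
        # cross-correlate the residual with the atom at every delay
        corr = np.correlate(residual, atom, mode="valid")
        delay = int(np.argmax(np.abs(corr)))
        amp = corr[delay] / atom_energy        # least-squares amplitude
        residual[delay:delay + len(atom)] -= amp * atom
        components.append((delay, amp))
    return components, residual

# toy signal: two delayed, scaled copies of a short tone burst
t = np.linspace(0.0, 1.0, 64)
atom = np.sin(2 * np.pi * 8 * t) * np.hanning(64)
signal = np.zeros(512)
signal[50:114] += 1.0 * atom
signal[300:364] += 0.5 * atom

comps, res = matching_pursuit(signal, atom, n_iter=2)
```

With noiseless, non-overlapping packets the two delays and amplitudes are recovered exactly and the residual vanishes.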
{"title":"Single Atom Convolutional Matching Pursuit: Theoretical Framework and Application to Lamb Waves based Structural Health Monitoring","authors":"Sebastian Rodriguez, Marc Rébillat, Shweta Paunikar, Pierre Margerit, Eric Monteiro, Francisco Chinesta, Nazih Mechbal","doi":"arxiv-2408.08929","DOIUrl":"https://doi.org/arxiv-2408.08929","url":null,"abstract":"Structural Health Monitoring (SHM) aims to monitor in real time the health\u0000state of engineering structures. For thin structures, Lamb Waves (LW) are very\u0000efficient for SHM purposes. A bonded piezoelectric transducer (PZT) emits LW in\u0000the structure in the form of a short tone burst. This initial wave packet (IWP)\u0000propagates in the structure and interacts with its boundaries and\u0000discontinuities and with eventual damages generating additional wave packets.\u0000The main issues with LW based SHM are that at least two LW modes are\u0000simultaneously excited and that those modes are dispersive. Matching Pursuit\u0000Method (MPM), which consists of approximating a signal as a sum of different\u0000delayed and scaled atoms taken from an a priori known learning dictionary,\u0000seems very appealing in such a context, however is limited to nondispersive\u0000signals and relies on a priori known dictionary. An improved version of MPM\u0000called the Single Atom Convolutional Matching Pursuit method (SACMPM), which\u0000addresses the dispersion phenomena by decomposing a measured signal as delayed\u0000and dispersed atoms and limits the learning dictionary to only one atom, is\u0000proposed here. Its performances are illustrated when dealing with numerical and\u0000experimental signals as well as its usage for damage detection. 
Although the\u0000signal approximation method proposed in this paper finds an original\u0000application in the context of SHM, this method remains completely general and\u0000can be easily applied to any signal processing problem.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guanchu Wang, Junhao Ran, Ruixiang Tang, Chia-Yuan Chang, Chia-Yuan Chang, Yu-Neng Chuang, Zirui Liu, Vladimir Braverman, Zhandong Liu, Xia Hu
Despite the impressive capabilities of Large Language Models (LLMs) in general medical domains, questions remain about their performance in diagnosing rare diseases. To answer this question, we aim to assess the diagnostic performance of LLMs in rare diseases and explore methods to enhance their effectiveness in this area. In this work, we introduce a rare disease question-answering (ReDis-QA) dataset to evaluate the performance of LLMs in diagnosing rare diseases. Specifically, we collected 1360 high-quality question-answer pairs within the ReDis-QA dataset, covering 205 rare diseases. Additionally, we annotated metadata for each question, facilitating the extraction of subsets specific to any given disease and its properties. Based on the ReDis-QA dataset, we benchmarked several open-source LLMs, revealing that diagnosing rare diseases remains a significant challenge for these models. To facilitate retrieval-augmented generation for rare disease diagnosis, we collect the first rare diseases corpus (ReCOP), sourced from the National Organization for Rare Disorders (NORD) database. Specifically, we split the report of each rare disease into multiple chunks, each representing a different property of the disease, including its overview, symptoms, causes, effects, related disorders, diagnosis, and standard therapies. This structure ensures that the information within each chunk aligns consistently with a question. Experimental results demonstrate that ReCOP can effectively improve the accuracy of LLMs on the ReDis-QA dataset by an average of 8%. Moreover, it significantly guides LLMs to generate trustworthy answers and explanations that can be traced back to existing literature.
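The property-aligned chunking idea can be sketched as follows. The field names and the word-overlap retriever here are illustrative assumptions, not the actual ReCOP schema or retrieval pipeline:

```python
# Hypothetical property list mirroring the report sections named in the abstract.
PROPERTIES = ["overview", "symptoms", "causes", "effects",
              "related disorders", "diagnosis", "standard therapies"]

def chunk_report(disease, report):
    """Split one disease report (a dict of sections) into one chunk per property."""
    return [{"disease": disease, "property": p, "text": report[p]}
            for p in PROPERTIES if p in report]

def retrieve(chunks, question, k=1):
    """Rank chunks by naive word overlap with the question (toy retriever)."""
    q = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c["text"].lower().split())),
                    reverse=True)
    return scored[:k]

report = {"overview": "a rare metabolic disorder",
          "symptoms": "fatigue muscle weakness and joint pain",
          "diagnosis": "confirmed by genetic testing"}
chunks = chunk_report("disease-X", report)
best = retrieve(chunks, "what symptoms such as fatigue are typical")[0]
```

Because each chunk carries a single property, a retrieved chunk aligns with the aspect of the disease the question asks about.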
{"title":"Assessing and Enhancing Large Language Models in Rare Disease Question-answering","authors":"Guanchu Wang, Junhao Ran, Ruixiang Tang, Chia-Yuan Chang, Chia-Yuan Chang, Yu-Neng Chuang, Zirui Liu, Vladimir Braverman, Zhandong Liu, Xia Hu","doi":"arxiv-2408.08422","DOIUrl":"https://doi.org/arxiv-2408.08422","url":null,"abstract":"Despite the impressive capabilities of Large Language Models (LLMs) in\u0000general medical domains, questions remain about their performance in diagnosing\u0000rare diseases. To answer this question, we aim to assess the diagnostic\u0000performance of LLMs in rare diseases, and explore methods to enhance their\u0000effectiveness in this area. In this work, we introduce a rare disease\u0000question-answering (ReDis-QA) dataset to evaluate the performance of LLMs in\u0000diagnosing rare diseases. Specifically, we collected 1360 high-quality\u0000question-answer pairs within the ReDis-QA dataset, covering 205 rare diseases.\u0000Additionally, we annotated meta-data for each question, facilitating the\u0000extraction of subsets specific to any given disease and its property. Based on\u0000the ReDis-QA dataset, we benchmarked several open-source LLMs, revealing that\u0000diagnosing rare diseases remains a significant challenge for these models. To facilitate retrieval augmentation generation for rare disease diagnosis,\u0000we collect the first rare diseases corpus (ReCOP), sourced from the National\u0000Organization for Rare Disorders (NORD) database. Specifically, we split the\u0000report of each rare disease into multiple chunks, each representing a different\u0000property of the disease, including their overview, symptoms, causes, effects,\u0000related disorders, diagnosis, and standard therapies. 
This structure ensures\u0000that the information within each chunk aligns consistently with a question.\u0000Experiment results demonstrate that ReCOP can effectively improve the accuracy\u0000of LLMs on the ReDis-QA dataset by an average of 8%. Moreover, it significantly\u0000guides LLMs to generate trustworthy answers and explanations that can be traced\u0000back to existing literature.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhenzhong Wang, Haowei Hua, Wanyu Lin, Ming Yang, Kay Chen Tan
Crystalline materials, with their symmetrical and periodic structures, possess a diverse array of properties and have been widely used in various fields, e.g., sustainable development. To discover crystalline materials, traditional experimental and computational approaches are often time-consuming and expensive. In recent years, thanks to the explosive growth of crystalline materials data, great interest has been directed toward data-driven materials discovery. In particular, recent advancements have exploited the expressive representation ability of deep learning to model the highly complex atomic systems within crystalline materials, opening up new avenues for fast and accurate materials discovery. These works typically focus on four types of tasks: physicochemical property prediction, crystalline material synthesis, aiding characterization, and force field development; these tasks are essential for scientific research and development in crystalline materials science. Despite the remarkable progress, there is still a lack of systematic research summarizing their correlations, distinctions, and limitations. To fill this gap, we systematically investigate the progress made in deep learning-based materials discovery in recent years. We first introduce several data representations of crystalline materials. Based on these representations, we summarize various fundamental deep learning models and their tailored usages in materials discovery tasks. We also point out the remaining challenges and propose several future directions. The main goal of this review is to offer comprehensive and valuable insights and foster progress at the intersection of artificial intelligence and materials science.
{"title":"Crystalline Material Discovery in the Era of Artificial Intelligence","authors":"Zhenzhong Wang, Haowei Hua, Wanyu Lin, Ming Yang, Kay Chen Tan","doi":"arxiv-2408.08044","DOIUrl":"https://doi.org/arxiv-2408.08044","url":null,"abstract":"Crystalline materials, with their symmetrical and periodic structures,\u0000possess a diverse array of properties and have been widely used in various\u0000fields, e.g., sustainable development. To discover crystalline materials,\u0000traditional experimental and computational approaches are often time-consuming\u0000and expensive. In these years, thanks to the explosive amount of crystalline\u0000materials data, great interest has been given to data-driven materials\u0000discovery. Particularly, recent advancements have exploited the expressive\u0000representation ability of deep learning to model the highly complex atomic\u0000systems within crystalline materials, opening up new avenues for fast and\u0000accurate materials discovery. These works typically focus on four types of\u0000tasks, including physicochemical property prediction, crystalline material\u0000synthesis, aiding characterization, and force field development; these tasks\u0000are essential for scientific research and development in crystalline materials\u0000science. Despite the remarkable progress, there is still a lack of systematic\u0000research to summarize their correlations, distinctions, and limitations. To\u0000fill this gap, we systematically investigated the progress made in deep\u0000learning-based material discovery in recent years. We first introduce several\u0000data representations of the crystalline materials. Based on the\u0000representations, we summarize various fundamental deep learning models and\u0000their tailored usages in material discovery tasks. We also point out the\u0000remaining challenges and propose several future directions. 
The main goal of\u0000this review is to offer comprehensive and valuable insights and foster progress\u0000in the intersection of artificial intelligence and material science.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This contribution combines a low-rank matrix approximation through Singular Value Decomposition (SVD) with second-order Krylov subspace-based Model Order Reduction (MOR) in order to efficiently propagate input uncertainties through a given vibroacoustic model. The vibroacoustic model consists of a plate coupled to a fluid into which the plate radiates sound due to a turbulent boundary layer excitation. This excitation is subject to uncertainties due to the stochastic nature of the turbulence, and the computational cost of simulating the coupled problem with stochastic forcing is very high. The proposed method approximates the output uncertainties in an efficient way by reducing the evaluation cost of the model in terms of degrees of freedom (DOFs) and samples, using the factors of the SVD low-rank approximation directly as input for the MOR algorithm. Here, the covariance matrix of the vector of unknowns can be approximated efficiently with only a fraction of the original number of evaluations. The approach is therefore a promising step toward further reducing the computational effort of large-scale vibroacoustic evaluations.
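The core saving can be illustrated without the Krylov MOR stage: when the stochastic load matrix has low rank, solving the system only for its scaled left singular vectors reproduces the output covariance with a fraction of the solves. A minimal sketch with a stand-in system matrix (the actual vibroacoustic operator and the MOR step are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_samples, rank = 60, 500, 4

# correlated stochastic load samples with low effective rank
F = rng.standard_normal((n_dof, rank)) @ rng.standard_normal((rank, n_samples))

# stand-in well-conditioned symmetric "system" matrix (hypothetical)
B = rng.standard_normal((n_dof, n_dof))
A = np.eye(n_dof) + 0.01 * (B + B.T)

# reference: one solve per sample, then the sample covariance of the unknowns
X = np.linalg.solve(A, F)
cov_full = X @ X.T / (n_samples - 1)

# low-rank route: truncated SVD of the loads, then only `rank` solves.
# Since V has orthonormal columns, cov = (A^-1 U S)(A^-1 U S)^T / (n-1).
U, s, _ = np.linalg.svd(F, full_matrices=False)
Y = np.linalg.solve(A, U[:, :rank] * s[:rank])
cov_lr = Y @ Y.T / (n_samples - 1)
```

Here 4 solves replace 500 while reproducing the covariance exactly, because the loads are exactly rank 4; with approximately low-rank loads the truncation introduces a controllable error.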
{"title":"Efficient low rank model order reduction of vibroacoustic problems under stochastic loads","authors":"Yannik Hüpel, Ulrich Römer, Matthias Bollhöfer, Sabine Langer","doi":"arxiv-2408.08402","DOIUrl":"https://doi.org/arxiv-2408.08402","url":null,"abstract":"This contribution combines a low-rank matrix approximation through Singular\u0000Value Decomposition (SVD) with second-order Krylov subspace-based Model Order\u0000Reduction (MOR), in order to efficiently propagate input uncertainties through\u0000a given vibroacoustic model. The vibroacoustic model consists of a plate\u0000coupled to a fluid into which the plate radiates sound due to a turbulent\u0000boundary layer excitation. This excitation is subject to uncertainties due to\u0000the stochastic nature of the turbulence and the computational cost of\u0000simulating the coupled problem with stochastic forcing is very high. The\u0000proposed method approximates the output uncertainties in an efficient way, by\u0000reducing the evaluation cost of the model in terms of DOFs and samples by using\u0000the factors of the SVD low-rank approximation directly as input for the MOR\u0000algorithm. Here, the covariance matrix of the vector of unknowns can\u0000efficiently be approximated with only a fraction of the original number of\u0000evaluations. 
Therefore, the approach is a promising step to further reducing\u0000the computational effort of large-scale vibroacoustic evaluations.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jie Li, Cillian Hourican, Pashupati P. Mishra, Binisha H. Mishra, Mika Kähönen, Olli T. Raitakari, Reijo Laaksonen, Mika Ala-Korpela, Liisa Keltikangas-Järvinen, Markus Juonala, Terho Lehtimäki, Jos A. Bosch, Rick Quax
There is a significant comorbidity between cardiovascular diseases (CVD) and depression that is highly predictive of poor clinical outcome. Yet, its underlying biological pathways remain challenging to decipher, presumably due to its non-linear associations across multiple mechanisms. Mutual information provides a framework to analyze such intricacies. In this study, we proposed a multipartite projection method based on mutual information correlations to construct multilayer disease networks. We applied the method to a cross-sectional dataset from a wave of the Young Finns Study. This dataset assesses CVD and depression, along with related risk factors and two omics layers of biomarkers: metabolites and lipids. Instead of directly correlating CVD-related phenotypes and depressive symptoms, we extended the notion of bipartite networks to create a multipartite network that connects these phenotype and symptom variables to intermediate biological variables. Projecting from these intermediate variables results in a weighted multilayer network, where each link between CVD and depression variables is marked by its `layer' (i.e., metabolome or lipidome). Using this projection method, we identified potential mediating biomarkers that connect CVD to depression. These biomarkers thus may play significant roles in the biological pathways of CVD-depression comorbidity. Additionally, the projected network highlights sex and BMI as the most important risk factors, or confounders, associated with the comorbidity. Our method can generalize to any number of omics layers and disease phenotypes, offering a truly system-level overview of biological pathways contributing to comorbidity.
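A toy version of the MI-based projection, with synthetic data and an illustrative edge-weight rule (the paper's exact weighting scheme is not reproduced here). Mutual information, unlike correlation, picks up the non-linear phenotype-biomarker link:

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram estimate of mutual information (in nats) between two samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
n = 2000
metabolite = rng.standard_normal(n)                          # intermediate variable
cvd_marker = metabolite + 0.3 * rng.standard_normal(n)       # linear dependence
depression = metabolite ** 2 + 0.3 * rng.standard_normal(n)  # non-linear dependence
noise_var = rng.standard_normal(n)                           # unrelated variable

# bipartite MI links: phenotype/symptom <-> intermediate biomarker
w_cvd = mutual_info(cvd_marker, metabolite)
w_dep = mutual_info(depression, metabolite)
w_noise = mutual_info(noise_var, metabolite)

# projected CVD-depression edge through the metabolome 'layer';
# the min-of-links rule is an illustrative choice for the projected weight
edge = {"layer": "metabolome", "weight": min(w_cvd, w_dep)}
```

Both the linear and the quadratic dependence yield MI well above the (small, positive) estimator bias seen for the unrelated variable.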
{"title":"Multilayer Network of Cardiovascular Diseases and Depression via Multipartite Projection","authors":"Jie Li, Cillian Hourican, Pashupati P. Mishra, Binisha H. Mishra, Mika Kähönen, Olli T. Raitakari, Reijo Laaksonen, Mika Ala-Korpela, Liisa Keltikangas-Järvinen, Markus Juonala, Terho Lehtimäki, Jos A. Bosch, Rick Quax","doi":"arxiv-2408.07562","DOIUrl":"https://doi.org/arxiv-2408.07562","url":null,"abstract":"There is a significant comorbidity between cardiovascular diseases (CVD) and\u0000depression that is highly predictive of poor clinical outcome. Yet, its\u0000underlying biological pathways remain challenging to decipher, presumably due\u0000to its non-linear associations across multiple mechanisms. Mutual information\u0000provides a framework to analyze such intricacies. In this study, we proposed a\u0000multipartite projection method based on mutual information correlations to\u0000construct multilayer disease networks. We applied the method to a\u0000cross-sectional dataset from a wave of the Young Finns Study. This dataset\u0000assesses CVD and depression, along with related risk factors and two omics of\u0000biomarkers: metabolites and lipids. Instead of directly correlating CVD-related\u0000phenotypes and depressive symptoms, we extended the notion of bipartite\u0000networks to create a multipartite network that connects these phenotype and\u0000symptom variables to intermediate biological variables. Projecting from these\u0000intermediate variables results in a weighted multilayer network, where each\u0000link between CVD and depression variables is marked by its `layer' (i.e.,\u0000metabolome or lipidome). Using this projection method, we identified potential\u0000mediating biomarkers that connect CVD to depression. These biomarkers thus may\u0000play significant roles in the biological pathways of CVD-depression\u0000comorbidity. 
Additionally, the projected network highlights sex and BMI as the\u0000most important risk factors, or confounders, associated with the comorbidity.\u0000Our method can generalize to any number of omics layers and disease phenotypes,\u0000offering a truly system-level overview of biological pathways contributing to\u0000comorbidity.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current and future trends in computer hardware, in which the disparity between available flops and memory bandwidth continues to grow, favour algorithm implementations that minimise data movement even at the cost of more flops. In this study we review the requirements for high-performance implementations of the kernel-independent Fast Multipole Method (kiFMM), a variant of the FMM algorithm for the rapid evaluation of N-body potential problems. Performant implementations of the kiFMM typically rely on Fast Fourier Transforms for the crucial M2L (Multipole-to-Local) operation. In recent years, however, BLAS-based M2L translation operators relying on direct matrix compression techniques have become popular for other FMM variants, such as the black-box FMM. In this paper we present algorithmic improvements for BLAS-based M2L translation operators and benchmark them against FFT-based M2L translation operators. To allow a fair comparison, we have implemented our own high-performance kiFMM algorithm in Rust that performs competitively against other implementations and allows us to flexibly switch between BLAS- and FFT-based translation operators.
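The FFT-versus-BLAS trade-off can be illustrated with a 1-D toy: for a translation-invariant kernel on a regular grid, an M2L-like operator is circulant, so it can be applied either as a dense GEMV (the BLAS route, here without compression) or diagonalised by the FFT in O(n log n). This is only a structural analogue of the 3-D, compressed operators discussed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
kernel = 1.0 / (1.0 + np.arange(n))   # stand-in translation-invariant kernel
x = rng.standard_normal(n)            # toy multipole coefficients

# BLAS route: materialise the circulant operator C[i, j] = kernel[(i - j) % n]
idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
C = kernel[idx]
y_blas = C @ x                        # dense GEMV, O(n^2)

# FFT route: a circulant matvec is a circular convolution, O(n log n)
y_fft = np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(x)).real
```

Both routes apply the same linear operator; the choice between them trades flops and memory traffic exactly as the abstract describes.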
{"title":"M2L Translation Operators for Kernel Independent Fast Multipole Methods on Modern Architectures","authors":"Srinath Kailasa, Timo Betcke, Sarah El Kazdadi","doi":"arxiv-2408.07436","DOIUrl":"https://doi.org/arxiv-2408.07436","url":null,"abstract":"Current and future trends in computer hardware, in which the disparity\u0000between available flops and memory bandwidth continues to grow, favour\u0000algorithm implementations which minimise data movement even at the cost of more\u0000flops. In this study we review the requirements for high performance\u0000implementations of the kernel independent Fast Multipole Method (kiFMM), a\u0000variant of the crucial FMM algorithm for the rapid evaluation of N-body\u0000potential problems. Performant implementations of the kiFMM typically rely on\u0000Fast Fourier Transforms for the crucial M2L (Multipole-to-Local) operation.\u0000However, in recent years for other FMM variants such as the black-box FMM also\u0000BLAS based M2L translation operators have become popular that rely on direct\u0000matrix compression techniques. In this paper we present algorithmic\u0000improvements for BLAS based M2L translation operator and benchmark them against\u0000FFT based M2L translation operators. 
In order to allow a fair comparison we\u0000have implemented our own high-performance kiFMM algorithm in Rust that performs\u0000competitively against other implementations, and allows us to flexibly switch\u0000between BLAS and FFT based translation operators.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study examines whether financial text is useful for tactical asset allocation with stocks, using natural language processing to create polarity indexes from financial news. We clustered the created polarity indexes using a change-point detection algorithm. In addition, we constructed a stock portfolio and rebalanced it at each change point using an optimization algorithm. The asset allocation method proposed in this study consequently outperforms the comparative approach. This result suggests that the polarity index helps construct an equity asset allocation method.
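The pipeline sketched in the abstract (polarity index, change-point segmentation, rebalancing at each change point) can be illustrated with a simple CUSUM detector on synthetic data. The detector, thresholds, and rebalancing rule below are illustrative assumptions, not the paper's actual algorithms:

```python
import numpy as np

def cusum_change_points(x, threshold=8.0, drift=0.5):
    """Flag mean shifts in a series with a two-sided CUSUM detector."""
    ref, g_pos, g_neg, cps = x[0], 0.0, 0.0, []
    for t in range(1, len(x)):
        g_pos = max(0.0, g_pos + (x[t] - ref) - drift)
        g_neg = max(0.0, g_neg - (x[t] - ref) - drift)
        if max(g_pos, g_neg) > threshold:
            cps.append(t)
            ref, g_pos, g_neg = x[t], 0.0, 0.0   # restart in the new regime
    return cps

# synthetic polarity index: sentiment jumps from ~0 to ~1 at t = 100
rng = np.random.default_rng(3)
index = np.concatenate([0.0 + 0.1 * rng.standard_normal(100),
                        1.0 + 0.1 * rng.standard_normal(100)])
cps = cusum_change_points(index)

# rebalance at each detected change point: tilt to equities if sentiment rose
weights = {"equity": 0.5, "bond": 0.5}
for t in cps:
    rising = index[t] > index[:t].mean()
    weights = {"equity": 0.7, "bond": 0.3} if rising else {"equity": 0.3, "bond": 0.7}
```

The detector fires shortly after the simulated sentiment regime shift, and the portfolio tilts accordingly.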
{"title":"SSAAM: Sentiment Signal-based Asset Allocation Method with Causality Information","authors":"Rei Taguchi, Hiroki Sakaji, Kiyoshi Izumi","doi":"arxiv-2408.06585","DOIUrl":"https://doi.org/arxiv-2408.06585","url":null,"abstract":"This study demonstrates whether financial text is useful for tactical asset\u0000allocation using stocks by using natural language processing to create polarity\u0000indexes in financial news. In this study, we performed clustering of the\u0000created polarity indexes using the change-point detection algorithm. In\u0000addition, we constructed a stock portfolio and rebalanced it at each change\u0000point utilizing an optimization algorithm. Consequently, the asset allocation\u0000method proposed in this study outperforms the comparative approach. This result\u0000suggests that the polarity index helps construct the equity asset allocation\u0000method.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is significantly challenging to obtain accurate contact forces in peridynamics (PD) simulations due to the difficulty of surface particle identification, particularly for complex geometries. Here, an improved point-to-surface contact model with high accuracy is proposed for PD. First, the outer surface is identified using the eigenvalue method, and a Verlet list is then constructed to identify potential contact particle pairs efficiently. Subsequently, a point-to-surface contact search algorithm is used to determine precise contact locations, with the penalty function method calculating the contact force. Finally, the accuracy of this point-to-surface contact model is validated through several representative contact examples. The results demonstrate that the model can predict contact forces and deformations with high accuracy, aligning well with classical Hertz contact theory solutions. This work presents a contact model for PD that automatically recognizes external surface particles and accurately calculates the contact force, providing guidance for the study of multi-body contact as well as complex contact situations.
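A minimal point-to-point sketch of two of the ingredients, a Verlet candidate list and a linear penalty force (the paper's surface identification and point-to-surface projection are not reproduced here):

```python
import numpy as np

def verlet_list(pos, cutoff, skin=0.1):
    """All particle pairs closer than cutoff + skin (candidate contacts)."""
    n = len(pos)
    pairs = []
    for i in range(n):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        for j in np.nonzero(d < cutoff + skin)[0]:
            pairs.append((i, int(i + 1 + j)))
    return pairs

def penalty_forces(pos, pairs, r_contact, k=1e3):
    """Linear penalty force pushing overlapping particles apart."""
    f = np.zeros_like(pos)
    for i, j in pairs:
        dvec = pos[j] - pos[i]
        dist = np.linalg.norm(dvec)
        overlap = r_contact - dist
        if overlap > 0:
            fij = k * overlap * dvec / dist    # along the line of centres
            f[i] -= fij                        # equal and opposite forces
            f[j] += fij
    return f

pos = np.array([[0.0, 0.0], [0.08, 0.0], [1.0, 0.0]])   # first two overlap
pairs = verlet_list(pos, cutoff=0.15)
forces = penalty_forces(pos, pairs, r_contact=0.1)
```

The skin margin lets the candidate list be reused over several time steps before rebuilding, which is the usual point of a Verlet list.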
{"title":"An improved point-to-surface contact algorithm with penalty method for peridynamics","authors":"Haoran Zhang, Lisheng Liu, Xin Lai, Jun Li","doi":"arxiv-2408.06556","DOIUrl":"https://doi.org/arxiv-2408.06556","url":null,"abstract":"It is significantly challenging to obtain accurate contact forces in\u0000peridynamics (PD) simulations due to the difficulty of surface particles\u0000identification, particularly for complex geometries. Here, an improved\u0000point-to-surface contact model is proposed for PD with high accuracy. First,\u0000the outer surface is identified using the eigenvalue method and then we\u0000construct a Verlet list to identify potential contact particle pairs\u0000efficiently. Subsequently, a point-to-surface contact search algorithm is\u0000utilized to determine precise contact locations with the penalty function\u0000method calculating the contact force. Finally, the accuracy of this\u0000point-to-surface contact model is validated through several representative\u0000contact examples. The results demonstrate that the point-to-surface contact\u0000model model can predict contact forces and deformations with high accuracy,\u0000aligning well with the classical Hertz contact theory solutions. 
This work\u0000presents a contact model for PD that automatically recognizes external surface\u0000particles and accurately calculates the contact force, which provides guidance\u0000for the study of multi-body contact as well as complex contact situations.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Philip L. Lederer, Xaver Mooslechner, Joachim Schöberl
We present a scalable, high-order implicit large-eddy simulation (ILES) approach for incompressible transitional flows. The method employs the mass-conserving mixed stress (MCS) method for discretizing the Navier-Stokes equations. The MCS method's low dissipation characteristics, combined with the introduced operator-splitting solution technique, result in a high-order solver optimized for efficient, parallel computation of under-resolved turbulent flows. We further enhance the inherent capabilities of the ILES model by incorporating high-order upwind fluxes, and we examine its approximation behaviour in transitional aerodynamic flow problems. In this study, we use flows over the Eppler 387 airfoil at Reynolds numbers up to $3 \cdot 10^5$ as benchmarks for our simulations.
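For intuition, upwinding in its simplest first-order form for linear advection (the paper's fluxes are high-order and embedded in the MCS discretisation, so this only shows the underlying idea of biasing the flux toward the direction information comes from):

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """First-order upwind update for u_t + a u_x = 0 with a > 0:
    information travels left to right, so difference against the left cell."""
    return u - a * dt / dx * (u - np.roll(u, 1))   # periodic domain

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)   # Gaussian pulse centred at x = 0.3
a, dx = 1.0, 1.0 / n
dt = 0.5 * dx / a                      # CFL number 0.5, stable
u = u0.copy()
for _ in range(100):                   # advect by a * 100 * dt = 0.25
    u = upwind_step(u, a, dx, dt)
```

The pulse is transported at the correct speed and the scheme is conservative, but its first-order numerical dissipation smears the peak, which is why high-order upwind fluxes are attractive for under-resolved turbulence.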
{"title":"High-order projection-based upwind method for simulation of transitional turbulent flows","authors":"Philip L. Lederer, Xaver Mooslechner, Joachim Schöberl","doi":"arxiv-2408.06698","DOIUrl":"https://doi.org/arxiv-2408.06698","url":null,"abstract":"We present a scalable, high-order implicit large-eddy simulation (ILES)\u0000approach for incompressible transitional flows. This method employs the\u0000mass-conserving mixed stress (MCS) method for discretizing the Navier-Stokes\u0000equations. The MCS method's low dissipation characteristics, combined with the\u0000introduced operator-splitting solution technique, result in a high-order solver\u0000optimized for efficient and parallel computation of under-resolved turbulent\u0000flows. We further enhance the inherent capabilities of the ILES model by\u0000incorporating high-order upwind fluxes and are examining its approximation\u0000behaviour in transitional aerodynamic flow problems. In this study, we use\u0000flows over the Eppler 387 airfoil at Reynolds numbers up to $3 cdot 10^5$ as\u0000benchmarks for our simulations.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Elena Natterer, Roman Engelhardt, Sebastian Hörl, Klaus Bogenberger
Rapid urbanization and growing urban populations worldwide present significant challenges for cities, including increased traffic congestion and air pollution. Effective strategies are needed to manage traffic volumes and reduce emissions. In practice, traditional traffic flow simulations are used to test those strategies. However, high computational intensity usually limits their applicability in investigating a multitude of different scenarios to identify the best policies. This paper introduces an innovative approach to assess the effects of traffic policies using Graph Neural Networks (GNN). By incorporating complex transport network structures directly into the neural network, this approach could enable rapid testing of various policies without the delays associated with traditional simulations. We provide a proof of concept that GNNs can learn and predict changes in car volume resulting from capacity reduction policies. We train a GNN model based on a training set generated with a MATSim simulation for Paris, France. We analyze the model's performance across different road types and scenarios, finding that the GNN is generally able to learn the effects on edge-based traffic volume induced by policies. The model is especially successful in predicting changes on major streets. Nevertheless, the evaluation also showed that the current model has problems in predicting the impacts of spatially small policies and changes in traffic volume in regions where no policy is applied, due to spillovers and/or relocation of traffic.
{"title":"Graph Neural Network Approach to Predict the Effects of Road Capacity Reduction Policies: A Case Study for Paris, France","authors":"Elena Natterer, Roman Engelhardt, Sebastian Hörl, Klaus Bogenberger","doi":"arxiv-2408.06762","DOIUrl":"https://doi.org/arxiv-2408.06762","url":null,"PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
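The core mechanism behind the GNN approach described above is message passing over the road graph followed by an edge-level readout of traffic volume. The following is a minimal self-contained sketch of that idea in plain NumPy; the architecture, features, and weights are purely illustrative assumptions and do not correspond to the authors' actual model.

```python
import numpy as np

def gnn_edge_volumes(A, x, W_self, W_nbr, w_edge):
    """One message-passing layer plus an edge readout.

    A      : (n, n) adjacency matrix of the road graph (nodes = junctions)
    x      : (n, f) node features (e.g. encoded policy / capacity changes)
    Returns the endpoints (i, j) of each directed edge and a predicted
    volume change per edge.
    """
    # mean-aggregate neighbor features, then a shared nonlinear update
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    h = np.tanh(x @ W_self + (A @ x) / deg @ W_nbr)
    # readout: one scalar per directed edge (i, j) from its endpoint embeddings
    i, j = np.nonzero(A)
    return i, j, np.concatenate([h[i], h[j]], axis=1) @ w_edge
```

In the paper's setting such a model would be trained against MATSim-simulated edge volumes, so that new capacity-reduction scenarios can be evaluated in a forward pass instead of a full simulation run.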