Georges Dubourg, Zoran Pavlović, Branimir Bajac, Manil Kukkar, Nina Finčur, Zorica Novaković, Marko Radović
The application of metal oxide nanomaterials (MOx NMs) in the agrifood industry offers innovative solutions that can facilitate a paradigm shift in a sector currently challenged to meet growing food production requirements while safeguarding the environment from the impacts of current agricultural practices. This review comprehensively illustrates recent advancements and applications of MOx for sustainable practices in the food and agricultural industries and for environmental preservation. Published data indicate that MOx NMs can be tailored for specific properties, enabling advanced design concepts with improved features for various applications in the agrifood industry. Applications include nano-agrochemical formulation, control of food quality through nanosensors, and smart food packaging. Furthermore, recent research suggests a vital role for MOx in addressing environmental challenges by removing toxic elements from contaminated soil and water, which mitigates the environmental effects of widespread agrichemical use and creates a more favorable environment for plant growth. The review also discusses potential barriers, particularly regarding MOx toxicity and risk evaluation. Fundamental concerns about possible adverse effects on human health and the environment must be addressed to establish an appropriate regulatory framework for nano metal oxide-based food and agricultural products.
"Advancement of metal oxide nanomaterials on agri-food fronts." arXiv:2407.19776, https://doi.org/arxiv-2407.19776. Published 2024-07-29 in arXiv - CS - Computational Engineering, Finance, and Science.
To understand the application of computer technology in financial investment, the author conducts a study of user fraud detection based on data mining. The sample consists of user transaction data from an online payment platform, with a total of 284,908 records, including 593 positive samples (fraud) and 285,214 negative samples (normal). Facing the problem of imbalanced positive and negative samples, the author uses undersampling to construct subsamples, and then applies feature scaling, outlier detection, feature screening, and other processing to the subsamples. Four classification models, logistic regression, K-nearest neighbors, decision tree, and support vector machine, are then trained on the processed subsamples. Evaluation of the four models' predictions shows that the logistic regression model achieves the highest recall, F1 score, and AUC, indicating that the detection method based on computer data mining is practical and feasible.
"Application of Computer Technology in Financial Investment," Xinye Sha. arXiv:2407.19684, https://doi.org/arxiv-2407.19684. Published 2024-07-29.
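The pipeline described above (undersampling, feature scaling, logistic regression, recall-based evaluation) can be sketched as follows. Since the platform's transaction data are not public, the data here are a synthetic stand-in, and the class ratio, features, and training settings are illustrative assumptions rather than the paper's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the payment-platform data: a heavy class
# imbalance between normal (0) and fraud (1) samples.
n_neg, n_pos = 2000, 40
X = np.vstack([rng.normal(0.0, 1.0, size=(n_neg, 2)),
               rng.normal(2.5, 1.0, size=(n_pos, 2))])
y = np.concatenate([np.zeros(n_neg), np.ones(n_pos)])

# Undersampling: keep every fraud sample, draw an equal number of normals.
pos = np.flatnonzero(y == 1)
neg = rng.choice(np.flatnonzero(y == 0), size=pos.size, replace=False)
Xs, ys = X[np.concatenate([pos, neg])], y[np.concatenate([pos, neg])]

# Feature scaling: standardize with statistics of the balanced subsample.
mu, sd = Xs.mean(axis=0), Xs.std(axis=0)
Xs = (Xs - mu) / sd

# Logistic regression trained by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))
    w -= 0.5 * (Xs.T @ (p - ys)) / ys.size
    b -= 0.5 * np.mean(p - ys)

# Recall on the full imbalanced set (fraud is the positive class).
p_all = 1.0 / (1.0 + np.exp(-(((X - mu) / sd) @ w + b)))
pred = (p_all >= 0.5).astype(float)
recall = np.sum((pred == 1) & (y == 1)) / np.sum(y == 1)
print(f"fraud recall: {recall:.2f}")
```

Training on the balanced subsample but evaluating on the full imbalanced set mirrors the study's setup, where recall on the rare fraud class is the metric of interest.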
It has become common practice to capture data from sports games with devices such as GPS sensors and cameras and then use the data for various analyses, including tactics discovery, similar-game retrieval, and performance studies. While this practice has been applied to many sports such as basketball and soccer, it remains largely unexplored for billiards, mainly due to the lack of publicly available datasets. Motivated by this, we collect a billiards dataset that includes the layouts (i.e., locations) of billiards balls after break shots, called break shot layouts; the traces of the balls resulting from strikes (in the form of trajectories); and detailed statistics and performance indicators. We then study and develop techniques for three tasks on the collected dataset: (1) prediction and (2) generation on the layout data, and (3) similar billiards layout retrieval on the layout data, which can serve different users such as coaches, players, and fans. We conduct extensive experiments on the collected dataset, and the results show that our methods perform effectively and efficiently.
"Billiards Sports Analytics: Datasets and Tasks," Qianru Zhang, Zheng Wang, Cheng Long, Siu-Ming Yiu. arXiv:2407.19686, https://doi.org/arxiv-2407.19686. Published 2024-07-29.
Mohammad Hossein Nikzad, Mohammad Heidari-Rarani, Mohsen Mirkhalaf
This study presents an innovative application of the Taguchi design-of-experiments method to optimize the structure of an Artificial Neural Network (ANN) model for predicting the elastic properties of short fiber reinforced composites. The main goal is to minimize the computational effort required for hyperparameter optimization while enhancing prediction accuracy. Using a robust design-of-experiments framework, the structure of an ANN model is optimized; in essence, this means identifying a combination of hyperparameters that yields optimal predictive accuracy with the fewest algorithmic runs, thereby significantly reducing the required computational effort. Our findings demonstrate that the Taguchi method not only streamlines the hyperparameter tuning process but can also substantially improve the algorithm's performance. These results underscore the potential of the Taguchi method as a powerful tool for optimizing machine learning algorithms, particularly when computational resources are limited. The implications of this study are far-reaching, offering insights for future research on optimizing different algorithms for improved accuracy and computational efficiency.
"A novel Taguchi-based approach for optimizing neural network architectures: application to elastic short fiber composites." arXiv:2407.19802, https://doi.org/arxiv-2407.19802. Published 2024-07-29.
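The Taguchi idea described above can be illustrated with a standard L9 orthogonal array: 9 runs cover four three-level hyperparameters instead of the 81 runs of a full grid, and a main-effects analysis then picks the best level of each factor. In this minimal sketch the validation-error function is a cheap synthetic surrogate (an assumption standing in for actually training the ANN), and the hyperparameter values are illustrative, not the paper's:

```python
import numpy as np

# L9 orthogonal array: 9 runs, 4 factors, 3 levels each (0-indexed).
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

# Candidate levels for four ANN hyperparameters (illustrative values).
neurons = [16, 32, 64]
layers = [1, 2, 3]
lrates = [1e-3, 1e-2, 1e-1]
batches = [16, 32, 64]

def val_error(n, l, lr, bs):
    """Cheap surrogate for the validation error of a trained ANN;
    a real study would train and validate the network here."""
    return (0.05 * abs(np.log2(n) - 5) + 0.10 * abs(l - 2)
            + 1.00 * abs(np.log10(lr) + 2) + 0.02 * abs(np.log2(bs) - 5))

# Run only the 9 orthogonal combinations instead of all 3**4 = 81.
errs = np.array([val_error(neurons[a], layers[b], lrates[c], batches[d])
                 for a, b, c, d in L9])

# Main-effects analysis: average the response of each factor at each
# level, then pick the level that minimizes that average.
best = [int(np.argmin([errs[L9[:, f] == lvl].mean() for lvl in range(3)]))
        for f in range(4)]
print("best level per factor:", best)
```

Because the array is orthogonal (every pair of levels of any two factors appears equally often), the per-factor averages isolate each hyperparameter's main effect from only 9 runs, which is the source of the computational savings the abstract describes.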
Inferring causal links or subgraphs corresponding to a specific phenotype or label based solely on measured data is an important yet challenging task, and one that differs from inferring causal nodes. While Graph Neural Network (GNN) explainers have shown potential in subgraph identification, existing GNN-based methods often offer associative rather than causal insights. This lack of transparency and explainability hinders our understanding of their results and of the underlying mechanisms. To address this issue, we propose a novel method for causal link/subgraph inference, called CIDER: Counterfactual-Invariant Diffusion-based GNN ExplaineR, which combines a counterfactual formulation with a diffusion process. It is a model-agnostic and task-agnostic framework for generating causal explanations that provides not only causal subgraphs, via the counterfactual formulation, but also reliable causal links, via the diffusion process. Specifically, CIDER is first formulated as an inference task that generatively provides two distributions, one for the causal subgraph and another for the spurious subgraph. Then, to enhance reliability, we further model the CIDER framework as a diffusion process. Using the causal subgraph distribution, we can explicitly quantify the contribution of each subgraph to a phenotype/label in a counterfactual manner, representing each subgraph's causal strength.
"CIDER: Counterfactual-Invariant Diffusion-based GNN Explainer for Causal Subgraph Inference," Qibin Zhang, Chengshang Lyu, Lingxi Chen, Qiqi Jin, Luonan Chen. arXiv:2407.19376, https://doi.org/arxiv-2407.19376. Published 2024-07-28.
Maruf Ahmed Mridul, Kaiyang Chang, Aparna Gupta, Oshani Seneviratne
The global financial landscape is experiencing significant transformation driven by technological advancements and evolving market dynamics. Moreover, blockchain technology has become a pivotal platform with widespread applications, especially in finance. Cross-border payments have emerged as a key area of interest, with blockchain offering inherent benefits such as enhanced security, transparency, and efficiency compared to traditional banking systems. This paper presents a novel framework leveraging blockchain technology and smart contracts to emulate cross-border payments, ensuring interoperability and compliance with international standards such as ISO20022. Key contributions of this paper include a novel prototype framework for implementing smart contracts and web clients for streamlined transactions and a mechanism to translate ISO20022 standard messages. Our framework can provide a practical solution for secure, efficient, and transparent cross-border transactions, contributing to the ongoing evolution of global finance and the emerging landscape of decentralized finance.
"Smart Contracts, Smarter Payments: Innovating Cross Border Payments and Reporting Transactions." arXiv:2407.19283, https://doi.org/arxiv-2407.19283. Published 2024-07-27.
Fashion is a powerful force in the modern world. It is one of the most accessible means of self-expression, thereby playing a significant role in our society. Yet, it is plagued by well-documented issues of waste and human rights abuses. Fast fashion in particular, characterized by its disposable nature, contributes extensively to environmental degradation and CO$_2$ emissions, surpassing the combined outputs of France, Germany, and the UK, but its economic contributions have somewhat shielded it from criticism. In this paper, we examine the demand for fast fashion, with a focus on Spain. We explore the individual decision-making process involved in choosing to buy fast fashion and the role of awareness regarding working conditions, environmental consequences, and education on sustainable fashion in influencing consumer behavior. By employing Agent-Based Modeling, we investigate the factors influencing garment consumption patterns and how shifts in public opinion can be achieved through peer pressure, social media influence, and government interventions. Our study revealed that government interventions are pivotal, with the state's campaigns setting the overall tone for progress, although their success is conditioned by social media influence and the level of polarization in the population. Importantly, the state does not need to adopt an extremely proactive stance or continue the campaigns indefinitely to achieve optimal results, as excessive interventions yield diminishing returns.
"Agent-Based Insight into Eco-Choices: Simulating the Fast Fashion Shift," Daria Soboleva, Angel Sánchez. arXiv:2407.18814, https://doi.org/arxiv-2407.18814. Published 2024-07-26.
The wheel-soil interaction has a great impact on the dynamics of off-road vehicles in terramechanics applications. The Soil Contact Model (SCM), which anchors an empirical method to characterize the frictional contact between a wheel and soil, has been widely used in off-road vehicle dynamics simulations because it quickly produces adequate results for many terramechanics applications. The SCM approach calls for a set of model parameters that are obtained via a bevameter test. This test is expensive and time-consuming to carry out, and in some cases difficult to set up, e.g., in extraterrestrial applications. We propose an approach to address these concerns by conducting the bevameter test in simulation, using a model that captures the physics of the actual experiment with high fidelity. To that end, we model the bevameter test rig as a multibody system, while the dynamics of the soil are captured using a discrete element model (DEM). The multibody dynamics-soil dynamics co-simulation is used to replicate the bevameter test, producing high-fidelity ground truth test data that is subsequently used to calibrate the SCM parameters within a Bayesian inference framework. To test the accuracy of the resulting SCM terramechanics, we run single wheel and full rover simulations using both DEM and SCM terrains. The SCM results match well with those produced by the DEM solution, and the simulation time for SCM is two to three orders of magnitude lower than that of DEM. All simulations in this work are performed using Chrono, an open-source, publicly available simulator. The scripts and models used are available in a public repository for reproducibility studies and further research.
"Using high-fidelity discrete element simulation to calibrate an expeditious terramechanics model in a multibody dynamics framework," Yuemin Zhang, Junpeng Dai, Wei Hu, Dan Negrut. arXiv:2407.18903, https://doi.org/arxiv-2407.18903. Published 2024-07-26.
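The calibration step described above, fitting terramechanics parameters to simulated bevameter data within a Bayesian framework, can be sketched with a Bekker-type pressure-sinkage law p = k z^n and random-walk Metropolis sampling. All numbers here (k, n, noise level, proposal scales) are illustrative assumptions rather than values from the paper, and the "observations" are synthetic instead of DEM output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "bevameter" pressure-sinkage data from a Bekker-type law
# p = k * z**n; k_true, n_true, and sigma are illustrative only.
k_true, n_true, sigma = 100.0, 1.1, 0.2
z = np.linspace(0.01, 0.10, 25)
p_obs = k_true * z**n_true + rng.normal(0.0, sigma, z.size)

def log_post(k, n):
    """Gaussian log-likelihood with flat priors on plausible ranges."""
    if not (10.0 < k < 500.0 and 0.5 < n < 2.0):
        return -np.inf
    resid = p_obs - k * z**n
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis sampling of (k, n).
theta = np.array([80.0, 1.0])
lp = log_post(*theta)
samples = []
for _ in range(6000):
    prop = theta + rng.normal(0.0, [2.0, 0.02])   # per-parameter step sizes
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[1000:])                   # discard burn-in
k_est, n_est = post.mean(axis=0)
print(f"posterior means: k = {k_est:.1f}, n = {n_est:.2f}")
```

The posterior means should land near the generating parameters; in the paper's workflow, the synthetic observations would instead come from the high-fidelity DEM co-simulation, and the calibrated parameters would feed the SCM terrain model.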
Real-world physical systems, like composite materials and porous media, exhibit complex heterogeneities and a multiscale nature, posing significant computational challenges. Computational homogenization is useful for predicting macroscopic properties from the microscopic material constitution. It involves defining a representative volume element (RVE), solving governing equations, and evaluating properties such as conductivity and elasticity. Despite its effectiveness, the approach can be computationally expensive. This study proposes a tensor-train (TT)-based asymptotic homogenization method to address these challenges. By deriving boundary value problems at the microscale and expressing them in the TT format, the proposed method estimates material properties efficiently. We demonstrate its validity and effectiveness through numerical experiments applying the proposed method to homogenization of thermal conductivity and elasticity in two- and three-dimensional materials, offering a promising solution for handling the multiscale nature of heterogeneous systems.
{"title":"Efficient computational homogenization via tensor train format","authors":"Yuki Sato, Yuto Lewis Terashima, Ruho Kondo","doi":"arxiv-2407.18870","DOIUrl":"https://doi.org/arxiv-2407.18870","url":null,"abstract":"Real-world physical systems, like composite materials and porous media, exhibit complex heterogeneities and multiscale nature, posing significant computational challenges. Computational homogenization is useful for predicting macroscopic properties from the microscopic material constitution. It involves defining a representative volume element (RVE), solving governing equations, and evaluating its properties such as conductivity and elasticity. Despite its effectiveness, the approach can be computationally expensive. This study proposes a tensor-train (TT)-based asymptotic homogenization method to address these challenges. By deriving boundary value problems at the microscale and expressing them in the TT format, the proposed method estimates material properties efficiently. We demonstrate its validity and effectiveness through numerical experiments applying the proposed method for homogenization of thermal conductivity and elasticity in two- and three-dimensional materials, offering a promising solution for handling the multiscale nature of heterogeneous systems.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141863336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
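The core ingredient of the abstract above is the tensor-train representation itself. The sketch below shows only that building block — the standard TT-SVD decomposition (successive truncated SVDs) and its reconstruction — not the paper's homogenization solver; the rank-1 test tensor and the truncation tolerance are illustrative choices:

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """TT-SVD: decompose a full tensor into tensor-train cores
    via successive truncated SVDs."""
    dims = T.shape
    d = len(dims)
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int((S > eps * S[0]).sum()))   # truncated TT-rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        M = (S[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([core.shape[1] for core in cores])

# Rank-1 test tensor T[i,j,k] = u[i] * v[j] * w[k]: all TT-ranks collapse to 1,
# so storage drops from prod(dims) entries to sum of tiny cores.
u, v, w = np.arange(1.0, 5.0), np.arange(1.0, 4.0), np.arange(1.0, 3.0)
T = np.einsum('i,j,k->ijk', u, v, w)
cores = tt_svd(T)
```

For a discretized RVE field, the same compression is what makes solving the microscale boundary value problems in TT format cheap relative to the full grid.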
Xueya Wang, Yiming Zhang, Minjie Wen, Herbert Mang
The Cracking Elements Method (CEM) is a numerical tool to simulate quasi-brittle fractures, which does not need remeshing, nodal enrichment, or a complicated crack-tracking strategy. The cracking elements used in the CEM can be considered as a special type of finite element implemented in standard finite element frameworks. One disadvantage of CEM is that it uses nonlinear interpolation of the displacement field (Q8 or T6 elements), introducing more nodes and thus more computational effort than elements using linear interpolation of the displacement field. To solve this problem, we propose in this work a simple hybrid linear and non-linear interpolation finite element for the adaptive cracking elements method. A simple strategy is proposed for treating elements with $p$ edge nodes, $p \in \left[0, n\right]$, with $n$ being the number of edges of the element. Only a few lines of code are needed. Then, by adding edge and center nodes only on the elements experiencing cracking and keeping linear interpolation of the displacement field for the elements outside the cracking domain, the total number of nodes is reduced to almost half of that required by conventional cracking elements. Numerical investigations show that the new approach inherits all the advantages of CEM with greatly improved computing efficiency.
{"title":"A simple hybrid linear and non-linear interpolation finite element for adaptive cracking elements method","authors":"Xueya Wang, Yiming Zhang, Minjie Wen, Herbert Mang","doi":"arxiv-2407.17104","DOIUrl":"https://doi.org/arxiv-2407.17104","url":null,"abstract":"The Cracking Elements Method (CEM) is a numerical tool to simulate quasi-brittle fractures, which does not need remeshing, nodal enrichment, or a complicated crack-tracking strategy. The cracking elements used in the CEM can be considered as a special type of finite element implemented in standard finite element frameworks. One disadvantage of CEM is that it uses nonlinear interpolation of the displacement field (Q8 or T6 elements), introducing more nodes and thus more computational effort than elements using linear interpolation of the displacement field. To solve this problem, we propose in this work a simple hybrid linear and non-linear interpolation finite element for the adaptive cracking elements method. A simple strategy is proposed for treating elements with $p$ edge nodes, $p \in \left[0, n\right]$, with $n$ being the number of edges of the element. Only a few lines of code are needed. Then, by adding edge and center nodes only on the elements experiencing cracking and keeping linear interpolation of the displacement field for the elements outside the cracking domain, the total number of nodes is reduced to almost half of that required by conventional cracking elements. Numerical investigations show that the new approach inherits all the advantages of CEM with greatly improved computing efficiency.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141771827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
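The node-saving claim in the abstract above can be illustrated with a back-of-envelope count. The sketch below compares a uniform n-by-n quad mesh where every element is quadratic (Q8: corner plus mid-edge nodes) against an adaptive mesh where only a band of elements along an assumed crack path is upgraded, leaving the rest linear (Q4). The grid size and crack path are hypothetical, and this counts nodes only — it is not the paper's implementation:

```python
def node_count(n, enriched):
    """Node count for an n-by-n quad grid where `enriched` is the set of
    (i, j) element indices upgraded from Q4 (linear) to Q8 (quadratic)."""
    corners = (n + 1) ** 2                  # corner nodes, present in any case
    edges = set()                           # unique edges owning a mid-edge node
    for (i, j) in enriched:
        # four edges of element (i, j), keyed canonically so that shared
        # edges between neighboring elements are counted once
        edges |= {('h', i, j), ('h', i, j + 1), ('v', i, j), ('v', i + 1, j)}
    return corners + len(edges)             # one extra mid-edge node per edge

n = 20
all_elems = {(i, j) for i in range(n) for j in range(n)}
crack_band = {(i, n // 2) for i in range(n)}   # one row of elements along the crack
full_q8 = node_count(n, all_elems)             # every element quadratic
adaptive = node_count(n, crack_band)           # quadratic only where cracking occurs
```

Enriching only the crack band here yields well under half the nodes of the all-Q8 mesh (this toy count omits the center nodes the method also adds in cracking elements), consistent with the abstract's reported reduction.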