Simulation Approximators Using Linear and Nonlinear Integration Neural Networks

Yoshiharu Iwata, Kouji Fujishiro, Hidefumi Wakamatsu
Shisutemu Seigyo Jōhō Gakkai ronbunshi, published 2023-08-15
DOI: 10.5687/iscie.36.243 (https://doi.org/10.5687/iscie.36.243)
Constructing approximators for simulations such as the finite element method with machine learning faces a trade-off between reducing training-data generation time and achieving approximation accuracy. Hybrid neural networks have been proposed as fast simulation approximators to address this trade-off. A linear approximator can be built as a simple perceptron with a linear activation function, based on deductive knowledge obtained with conventional approximation techniques such as multiple regression analysis; however, in simulations of complex structures, the number of phenomena that deductive knowledge can model is limited, so such an approximator's predictions contain errors. Hybrid neural networks address this by training a neural network on those prediction errors to form a correction approximator, allowing the combined approximator to account for effects that multiple regression analysis cannot express. This paper proposes a neural network whose structure integrates these approximators. The previously proposed Hybrid Neural Network (HNN) first trains the linear approximator and then trains a nonlinear approximator on the residual error. In contrast, the proposed Integration Neural Network (INN) trains the linear and nonlinear approximators simultaneously, optimizing their relative contributions during training. This method allows an INN to improve approximator accuracy and ease the conflict between the amount of training data and accuracy.
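The joint-training idea behind the INN can be illustrated with a minimal sketch (not the authors' implementation): a linear branch and a small one-hidden-layer tanh branch whose outputs are summed, with all parameters updated together by gradient descent. The layer sizes, learning rate, and toy target function below are illustrative assumptions; an HNN-style scheme would instead fit the linear branch first and then train the nonlinear branch on its residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a linear trend plus a small nonlinear correction term.
X = rng.uniform(-1, 1, size=(200, 2))
y = (3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * np.sin(3.0 * X[:, 0]))[:, None]

# Parameters: linear branch (w_lin, b_lin), nonlinear branch (W1, b1, w2, b2).
h = 16
w_lin = np.zeros((2, 1)); b_lin = np.zeros(1)
W1 = rng.normal(0, 0.5, (2, h)); b1 = np.zeros(h)
w2 = rng.normal(0, 0.5, (h, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass: both branches are evaluated and their outputs summed.
    hid = np.tanh(X @ W1 + b1)
    pred = X @ w_lin + b_lin + hid @ w2 + b2
    err = pred - y
    # Backward pass for mean squared error.
    g = 2.0 * err / len(X)
    grad_w_lin = X.T @ g
    grad_b_lin = g.sum(0)
    grad_w2 = hid.T @ g
    grad_b2 = g.sum(0)
    g_hid = (g @ w2.T) * (1.0 - hid ** 2)  # tanh derivative
    grad_W1 = X.T @ g_hid
    grad_b1 = g_hid.sum(0)
    # Joint update: linear and nonlinear parts learn simultaneously.
    w_lin -= lr * grad_w_lin; b_lin -= lr * grad_b_lin
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    w2 -= lr * grad_w2; b2 -= lr * grad_b2

final_pred = X @ w_lin + b_lin + np.tanh(X @ W1 + b1) @ w2 + b2
mse = float(np.mean((final_pred - y) ** 2))
```

Because both branches are updated at every step, the optimizer decides how much of the target each branch explains, which is the "optimized learning ratio" the abstract refers to; in the sequential HNN scheme that split is fixed by the order of training.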