In recent years, machine learning has increasingly been applied in other fields, such as economics, especially investment. However, many methods and models are used without knowing which is most suitable for predicting particular data. This study aims to find the most suitable model for predicting stock prices, comparing statistical learning (ARIMA) with the RNN, LSTM, and GRU deep learning methods, using stock price data for four major banks in Indonesia, namely BRI, BNI, BCA, and Mandiri, from 2013 to 2022. The results showed that ARIMA Box-Jenkins modeling is unsuitable for predicting the BRI, BNI, BCA, and Bank Mandiri stock prices, whereas GRU presented the best performance in predicting the stock prices of all four banks.
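The GRU update at the core of the best-performing model can be sketched as a minimal NumPy forward pass over a price window; the weights and prices below are toy values, not the paper's trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step: x is the input (d,), h the previous hidden state (n,).
    W, U, b hold the update (z), reset (r) and candidate (h) gate weights."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])        # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])        # reset gate
    h_tilde = np.tanh(W["h"] @ x + U["h"] @ (r * h) + b["h"])
    return (1.0 - z) * h + z * h_tilde                   # gated interpolation

rng = np.random.default_rng(0)
d, n = 1, 8  # one feature (closing price), 8 hidden units
W = {k: rng.normal(scale=0.1, size=(n, d)) for k in "zrh"}
U = {k: rng.normal(scale=0.1, size=(n, n)) for k in "zrh"}
b = {k: np.zeros(n) for k in "zrh"}

h = np.zeros(n)
for price in [4500.0, 4550.0, 4480.0]:        # toy price window, scaled
    h = gru_step(np.array([price]) / 5000.0, h, W, U, b)
```

In practice the recurrence would be wrapped in a trainable layer (e.g. a deep learning framework's GRU) with a dense head regressing the next price.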
Dias Satria: "PREDICTING BANKING STOCK PRICES USING RNN, LSTM, AND GRU APPROACH". Applied Computer Science, 2023-03-31. DOI: 10.35784/acs-2023-06.
Maintenance has a key impact on the efficiency of production processes, because the efficiency of the machines determines the ability of the system to produce in accordance with the assumed schedule. The key element of system performance assessment remains the availability of technological equipment, which directly translates into the efficiency and effectiveness of the performed production tasks. Given the dynamic nature of manufacturing processes, the proper selection of machinery and equipment for specific production tasks becomes an issue of particular importance. The purpose of this research was to determine the impact of technical and non-technical factors on the selection of machine tools for production tasks, and to develop a method of supporting the selection of production resources using the AHP and Fuzzy AHP methods. The research was carried out in a manufacturing company from the automotive industry.
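The AHP step of such a method derives criterion weights from a pairwise-comparison matrix via its principal eigenvector, with a consistency check; a minimal sketch on an illustrative three-criterion matrix (not the paper's data):

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three selection criteria
# (Saaty's 1-9 scale; reciprocal by construction).
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                      # priority (weight) vector

n = A.shape[0]
lam_max = eigvals[k].real
ci = (lam_max - n) / (n - 1)         # consistency index
ri = 0.58                            # Saaty's random index for n = 3
cr = ci / ri                         # consistency ratio; < 0.1 is acceptable
```

Fuzzy AHP replaces the crisp entries of `A` with triangular fuzzy numbers before the weights are extracted, but the overall flow is the same.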
P. Wittbrodt: "IDENTIFICATION OF THE IMPACT OF THE AVAILABILITY FACTOR ON THE EFFICIENCY OF PRODUCTION PROCESSES USING THE AHP AND FUZZY AHP METHODS". Applied Computer Science, 2022-12-27. DOI: 10.35784/acs-2022-32.
The paper presents a comparative analysis of the operation of two variants of centrifugal pump rotors, a description of the main parameters, and the influence of the blade geometry on the performance characteristics obtained. The rotors were designed using the arc method and the point method. Based on the developed 3D CAD models, the rotors were printed on a 3D printer using FFF (Fused Filament Fabrication) rapid prototyping technology, and their performance was verified experimentally on the Armfield FM50 test stand. The CFD part of the analysis covers the fluid flow simulation in Ansys Fluent: the process of creating the flow domain and generating a structured mesh is described, along with the definition of the boundary conditions, the physical conditions, and the turbulence model. The distribution of pressures and velocities in the meridional sections is shown graphically. The chapter with the experimental analysis contains a description of the measuring stand and the methodology used. The measurements made it possible to generate performance characteristics and compare the two designs. The results show the influence of the geometry on the behavior of the rotors during operation and indicate that the arc rotor achieves a 7% higher head and a 2% higher efficiency than the point-method rotor, which provides a basis for its commercial use in industry.
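The head and efficiency figures compared above follow from the standard pump relations H = Δp/(ρg) and η = ρgQH/P; a small sketch with hypothetical test-stand readings (not the paper's measurements):

```python
RHO, G = 998.0, 9.81  # water density (kg/m^3), gravity (m/s^2)

def pump_head(p_out, p_in):
    """Head in metres from the measured static pressure rise in Pa."""
    return (p_out - p_in) / (RHO * G)

def efficiency(q, h, p_shaft):
    """Hydraulic power over shaft power; q in m^3/s, h in m, p_shaft in W."""
    return RHO * G * q * h / p_shaft

# Illustrative readings: 0.6 bar pressure rise, 2 L/s flow, 180 W on the shaft
h = pump_head(1.6e5, 1.0e5)
eta = efficiency(2.0e-3, h, 180.0)
```

Evaluating both rotors over a range of flow rates with such relations yields the head and efficiency curves used for the comparison.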
Łukasz Semkło, Ł. Gierz: "NUMERICAL AND EXPERIMENTAL ANALYSIS OF A CENTRIFUGAL PUMP WITH DIFFERENT ROTOR GEOMETRIES". Applied Computer Science, 2022-12-21. DOI: 10.35784/acs-2022-30.
Elmehdi Benmalek, J. El mhamdi, A. Jilbab, A. Jbari
In 2019, the whole world faced a health emergency due to the emergence of the coronavirus (COVID-19). About 223 countries have been affected by the coronavirus. Medical and health services face difficulties in managing the disease, which requires a significant amount of health system resources. Several artificial intelligence-based systems have been designed to automatically detect COVID-19 in order to limit the spread of the virus. Researchers have found that this virus has a major impact on voice production due to the respiratory system's dysfunction. In this paper, we investigate and analyze the effectiveness of cough analysis for accurately detecting COVID-19. To do so, we performed binary classification, distinguishing COVID-positive patients from healthy controls. The records were collected from the Coswara Dataset, a crowdsourcing project from the Indian Institute of Science (IIS). After data collection, we extracted Mel-frequency cepstral coefficients (MFCC) from the cough records. These acoustic features are mapped to Decision Tree (DT), k-nearest neighbor (kNN) with k equal to 3, support vector machine (SVM), and deep neural network (DNN) classifiers, either directly or after dimensionality reduction using principal component analysis (PCA) with 95 percent retained variance or 6 principal components. The 3NN classifier with all features produced the best classification results: it detects COVID-19 patients with an accuracy of 97.48 percent, an f1-score of 96.96 percent, and an MCC of 0.95, suggesting that this method can accurately distinguish healthy controls from COVID-19 patients.
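The classification stage described above — kNN with k = 3 on MFCC features, optionally after PCA — can be sketched as follows; the feature vectors here are random stand-ins for real 13-dimensional MFCCs (which would come from an audio library applied to the cough recordings):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Stand-ins for MFCC vectors: 40 "COVID-positive" and 40 "healthy" coughs.
X = np.vstack([rng.normal(loc=1.0, size=(40, 13)),
               rng.normal(loc=-1.0, size=(40, 13))])
y = np.array([1] * 40 + [0] * 40)

# Variant 1: features mapped directly to a 3-nearest-neighbor classifier.
knn_raw = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# Variant 2: dimensionality reduction to 6 principal components first.
knn_pca = make_pipeline(PCA(n_components=6),
                        KNeighborsClassifier(n_neighbors=3)).fit(X, y)

acc = knn_raw.score(X, y)
```

A real evaluation would of course use a held-out test split and report f1-score and MCC alongside accuracy, as the paper does.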
Elmehdi Benmalek, J. El mhamdi, A. Jilbab, A. Jbari: "A COUGH-BASED COVID-19 DETECTION SYSTEM USING PCA AND MACHINE LEARNING CLASSIFIERS". Applied Computer Science, 2022-12-19. DOI: 10.35784/acs-2022-31.
Konrad Biercewicz, M. Borawski, A. Borawska, Jarosław Duda
Due to the popularity of video games in various applications, including both commercial and social marketing, there is a need to assess their content in terms of player satisfaction already at the production stage. For this purpose, the indices used in EEG studies can be applied. In this publication, a formula based on player engagement was created to determine which elements of the game should be improved, which graphic emblems connected with social campaigns were more memorable, and whether memorability was related to engagement. The survey was conducted using a 2D platform game created in Unity, based on observations of 28 participants. A corresponding engagement-based measure was created to evaluate the in-game elements at which memory for graphic symbols is higher. The optimal index for moving and static objects and the index for destruction were then selected based on the recorded data. Regarding placement, graphic emblems depicting social campaigns should be positioned where other activities, such as fighting, will not distract from them, and where every player can reach them. This study presents the developed method for determining the degree of player engagement with particular elements of the game using EEG, and explores the relationship between the visibility of social advertising and engagement in a 2D platform game in which the player has to collect three keys and defeat a final opponent.
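The paper's indices are game-specific, but EEG engagement measures of this kind are typically built from frequency-band powers; a minimal sketch of the classic beta/(alpha + theta) engagement ratio (an assumption for illustration, not the paper's exact formula) on a synthetic signal:

```python
import numpy as np

def band_power(sig, fs, lo, hi):
    """Mean power of sig in the [lo, hi) Hz band via an FFT periodogram."""
    freqs = np.fft.rfftfreq(sig.size, 1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / sig.size
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def engagement_index(sig, fs):
    """Classic beta / (alpha + theta) engagement ratio on one channel."""
    theta = band_power(sig, fs, 4, 8)
    alpha = band_power(sig, fs, 8, 13)
    beta = band_power(sig, fs, 13, 30)
    return beta / (alpha + theta)

fs = 256
t = np.arange(fs * 4) / fs
# Synthetic signal dominated by 20 Hz (beta) with weaker 10 Hz (alpha)
sig = 1.0 * np.sin(2 * np.pi * 20 * t) + 0.3 * np.sin(2 * np.pi * 10 * t)
ei = engagement_index(sig, fs)
```

Computing such an index over sliding windows aligned with game events is what allows engagement to be attributed to particular in-game elements.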
Konrad Biercewicz, M. Borawski, A. Borawska, Jarosław Duda: "DETERMINING THE DEGREE OF PLAYER ENGAGEMENT IN A COMPUTER GAME WITH ELEMENTS OF A SOCIAL CAMPAIGN USING COGNITIVE NEUROSCIENCE TECHNIQUES". Applied Computer Science, 2022-12-11. DOI: 10.35784/acs-2022-27.
In today’s highly computerized world, data compression is a key issue in minimizing the costs associated with data storage and transfer. In 2019, more than 70% of the data sent over the network were images. This paper analyses the feasibility of using the SVD algorithm in image compression and shows that it improves the efficiency of JPEG and JPEG2000 compression. Image matrices were decomposed using the SVD algorithm before compression. It has also been shown that as the image dimensions increase, the fraction of singular values that must be used to reconstruct the image in good quality decreases. The study was carried out on a large and diverse set of more than 2500 images. The results were analyzed based on criteria typical for the evaluation of numerical algorithms operating on matrices and of image compression: compression ratio, size of the compressed file, MSE, number of bad pixels, complexity, numerical stability, and ease of implementation.
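The core operation — truncating the SVD of an image matrix to its k largest singular values before compression — can be sketched in a few lines (toy matrix, not one of the paper's test images):

```python
import numpy as np

def svd_compress(img, k):
    """Best rank-k approximation of a grayscale image matrix:
    keep only the k largest singular values and their vectors."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k]

rng = np.random.default_rng(0)
img = rng.random((64, 64)) @ rng.random((64, 64))  # correlated test matrix

approx = svd_compress(img, 16)
mse = np.mean((img - approx) ** 2)

# Storage for the rank-k factors relative to the full 64x64 matrix:
# k column vectors, k row vectors and k singular values.
ratio = 16 * (64 + 64 + 1) / (64 * 64)
```

By the Eckart–Young theorem this truncation is the best rank-k approximation in the Frobenius norm, which is why feeding the smoothed matrix to JPEG/JPEG2000 can improve their efficiency.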
E. Łukasik, Emilia Łabuć: "ANALYSIS OF THE POSSIBILITY OF USING THE SINGULAR VALUE DECOMPOSITION IN IMAGE COMPRESSION". Applied Computer Science, 2022-12-03. DOI: 10.35784/acs-2022-28.
M. Kulisz, J. Kujawska, Z. Aubakirova, G. Zhairbaeva, Tomasz Warowny
The paper evaluated the possibility of using artificial neural network models to predict the compressive strength (Fc) of concretes with the addition of recycled concrete aggregate (RCA). Artificial neural network (ANN) approaches were used to model the process with three input variables: cement content in the range of 250 to 400 kg/m3, percentage of recycled concrete aggregate from 25% to 100%, and water-cement ratio from 0.45 to 0.6. The results indicate that the compressive strength of recycled concrete at 3, 7 and 28 days is strongly influenced by the cement content, the %RCA and the water-cement ratio, and that it decreases as the RCA content increases from 25% to 100%. The obtained MLP and RBF networks are characterized by a satisfactory capacity for predicting the compressive strength of concretes with RCA addition. In statistical terms, the correlation coefficient (R) reveals that both ANN approaches are powerful tools for predicting the compressive strength.
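An MLP regressor of the kind used here can be sketched on synthetic data that mimics the reported trends (strength rising with cement content, falling with %RCA and water-cement ratio); the coefficients below are illustrative assumptions, not the paper's dataset:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 300
cement = rng.uniform(250, 400, n)   # kg/m^3
rca = rng.uniform(25, 100, n)       # % recycled aggregate
wc = rng.uniform(0.45, 0.60, n)     # water-cement ratio

# Synthetic 28-day strength with the qualitative trends from the paper
fc = 0.12 * cement - 0.10 * rca - 40.0 * wc + rng.normal(0, 1.0, n)

X = np.column_stack([cement, rca, wc])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                 max_iter=5000, random_state=0),
).fit(X, fc)

# Correlation coefficient R between predictions and targets
r = np.corrcoef(model.predict(X), fc)[0, 1]
```

An RBF network would replace the hidden layer with radial basis units; the evaluation via R follows the same pattern.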
M. Kulisz, J. Kujawska, Z. Aubakirova, G. Zhairbaeva, Tomasz Warowny: "PREDICTION OF THE COMPRESSIVE STRENGTH OF ENVIRONMENTALLY FRIENDLY CONCRETE USING ARTIFICIAL NEURAL NETWORK". Applied Computer Science, 2022-12-03. DOI: 10.35784/acs-2022-29.
In this study we present a simulation system based on the Gillespie algorithm for generating evolutionary events in the evolution of a microbial population. The system is adjusted to reproduce experimental data obtained in barcoding studies, experimental techniques in microbiology that allow tracing microbial populations with very high resolution. The Gillespie simulation engine is constructed by defining its state vector and the rules for its modification. To simulate a barcoded experiment efficiently with the Gillespie algorithm, we introduce a modification: binning cells by lineage, with different bins defining components of the state vector. The simulation model captures events in microbial population growth, including cell death, division and mutation. The obtained simulation results reflect the population behavior, the mutation wave and the distribution of mutations across generations. The methodology is validated against published data from the experimental evolution of yeast that tracked clone sub-generations; fitting the simulation model to these measurements yielded good agreement.
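The core Gillespie loop — exponential waiting times and propensity-weighted event selection over division, death and mutation — can be sketched as follows (toy rates, not fitted to the yeast data; the paper's version additionally bins cells by barcode lineage):

```python
import numpy as np

def gillespie(n0, birth, death, mut, t_end, rng):
    """Minimal Gillespie SSA: state = (wild-type count, mutant count),
    with division, death and wild-type -> mutant mutation events."""
    t, wt, mu = 0.0, n0, 0
    while t < t_end and wt + mu > 0:
        # Propensities of the five possible events
        rates = np.array([birth * wt, death * wt, mut * wt,
                          birth * mu, death * mu])
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # exponential waiting time
        event = rng.choice(5, p=rates / total)  # propensity-weighted choice
        if event == 0:
            wt += 1                  # wild-type division
        elif event == 1:
            wt -= 1                  # wild-type death
        elif event == 2:
            wt, mu = wt - 1, mu + 1  # mutation
        elif event == 3:
            mu += 1                  # mutant division
        else:
            mu -= 1                  # mutant death
    return wt, mu

wt, mu = gillespie(n0=200, birth=1.0, death=0.4, mut=0.05, t_end=3.0,
                   rng=np.random.default_rng(42))
```

Replacing the two scalar counts with per-lineage bins turns each bin into a component of the Gillespie state vector, which is the modification the paper uses for barcoded experiments.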
J. Gil, A. Polański: "APPLICATION OF GILLESPIE ALGORITHM FOR SIMULATING EVOLUTION OF FITNESS OF MICROBIAL POPULATION". Applied Computer Science, 2022-10-04. DOI: 10.35784/acs-2022-25.
Sheikh Amir Fayaz, Majid Zaman, M. A. Butt, S. Kaul
Rainfall prediction is one of the most challenging tasks faced by researchers over the years. Many machine learning and AI-based algorithms have been implemented on different datasets for prediction purposes, but there is no single solution that predicts rainfall perfectly, and accurate prediction remains an open question. We offer a machine learning-based comparative evaluation of rainfall models for the Kashmir province. Both local geographic features and the time horizon have an influence on weather forecasting. The predictive models investigated include decision trees (DT), Logistic Model Trees (LMT), M5 model trees, Gradient Boosting, and GWLM-NARX. Weather predictors measured at three major meteorological stations in the Kashmir area of the UT of J&K, India, were utilized in the models. We compared the proposed models based on their accuracy, kappa, interpretability, and other statistics, as well as the significance of the predictors utilized. On the original dataset, the DT model delivers an accuracy of 80.12 percent, followed by the LMT and Gradient Boosting models, which produce accuracies of 87.23 percent and 87.51 percent, respectively.
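The decision-tree versus Gradient Boosting comparison can be sketched with scikit-learn on synthetic weather-like predictors (random stand-ins, not the Kashmir station data):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Hypothetical predictors: relative humidity (%), temperature (C),
# pressure anomaly (hPa).
X = np.column_stack([rng.uniform(20, 100, n),
                     rng.uniform(-5, 35, n),
                     rng.normal(0, 5, n)])
# Synthetic rain/no-rain label driven by humidity and pressure anomaly
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(0, 8, n) > 70).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
dt = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

acc_dt = dt.score(X_te, y_te)
acc_gb = gb.score(X_te, y_te)
```

A full comparison along the paper's lines would add kappa and per-predictor importance, and would treat the continuous-target models (M5-MT, GWLM-NARX) as regressors evaluated by MSE and R.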
Sheikh Amir Fayaz, Majid Zaman, M. A. Butt, S. Kaul: "HOW MACHINE LEARNING ALGORITHMS ARE USED IN METEOROLOGICAL DATA CLASSIFICATION: A COMPARATIVE APPROACH BETWEEN DT, LMT, M5-MT, GRADIENT BOOSTING AND GWLM-NARX MODELS". Applied Computer Science, 2022-10-01. DOI: 10.35784/acs-2022-26.
The paper discusses the solution of inverse thermomechanical problems requiring a large number of FEM tasks with various boundary conditions. The study examined the case when all tasks have the same number of nodes, finite elements, and nodal connections. In this study, the speedup of the solution of the inverse problem is achieved in two ways: (1) solving all FEM tasks in parallel mode, and (2) having all FEM tasks share a common matrix with the addresses of the nonzero elements in their stiffness matrices. These algorithms are implemented in the author's own FEM code, designed to solve inverse problems of hot metal forming. The calculations showed that the developed code in parallel mode is effective when the number of tasks is larger than 0.7-0.9 of the number of available processors; below that point, it becomes effective to solve all tasks sequentially while reusing the common matrix of addresses of nonzero elements in the stiffness matrix. The acceleration achieved with the optimal choice of algorithm is 2-10 times compared with classical multivariate FEM calculations. The paper provides an example of the practical application of the developed code: calculating the allowable processing maps for laser dieless drawing of ultra-thin wire from a copper alloy by solving the thermomechanical inverse problem. The achieved acceleration made it possible to use the developed parallel code in the control software of the laboratory setup for laser dieless drawing.
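The second speedup idea — every task variant reusing one precomputed table of nonzero-entry addresses while only the values and loads change — can be sketched on a toy 1D tridiagonal system (an illustration, not the author's FEM code; threads stand in for the parallel mode):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

n = 50  # nodes of a toy 1D mesh shared by all task variants

# Common address table of the nonzero stiffness entries (main diagonal
# plus the two off-diagonals), computed once and reused by every task.
rows = np.concatenate([np.arange(n), np.arange(n - 1), np.arange(1, n)])
cols = np.concatenate([np.arange(n), np.arange(1, n), np.arange(n - 1)])
vals = np.concatenate([np.full(n, 2.0), np.full(2 * (n - 1), -1.0)])

def solve_task(load):
    """Assemble and solve one task variant; only the load differs."""
    K = np.zeros((n, n))
    K[rows, cols] = vals   # fill stiffness via the shared address table
    f = np.zeros(n)
    f[-1] = load           # variant-specific boundary load
    return np.linalg.solve(K, f)

# "Parallel mode": run the boundary-condition variants concurrently.
with ThreadPoolExecutor() as pool:
    solutions = list(pool.map(solve_task, [1.0, 2.0, 5.0]))
```

In a production FEM code the dense matrix would be a sparse structure whose index arrays are exactly the shared address table, so each variant only writes a fresh values array.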
A. Milenin: "PARALLEL SOLUTION OF THERMOMECHANICAL INVERSE PROBLEMS FOR LASER DIELESS DRAWING OF ULTRA-THIN WIRE". Applied Computer Science, 2022-09-30. DOI: 10.35784/acs-2022-20.