Connected vehicle fleets have become a significant component of industrial Internet of Things scenarios within Industry 4.0 worldwide. The number of vehicles in these fleets has grown at a steady pace, and monitoring them with machine learning algorithms has significantly improved maintenance activities. The potential of predictive maintenance has increased where machines are controlled through networked smart devices: uptimes are optimized, associated time and labor costs are reduced, and cost-benefit ratios increase significantly. Considering vehicle fault trends, this research addresses the predictive maintenance problem through a hybrid deep learning-based ensemble method (HDLEM). The ensemble framework, which acts as a predictive analytics engine, comprises three deep learning algorithms: modified Cox proportional hazard deep learning (MCoxPHDL), modified deep learning embedded semi-supervised learning (MDLeSSL) and merged LSTM (MLSTM) networks. Both sensor and historical maintenance data are collected and prepared using benchmarking methods for HDLEM training and testing. Times between failures (TBF) are successfully modeled and predicted on this multi-source data, and the results are compared with those of the stated deep learning models. The ensemble framework offers great potential for more profitable, efficient and sustainable vehicle fleet management, and supports better use of telematics data for preventive management. The ensemble method's superiority is highlighted through several experimental results.
{"title":"Predictive maintenance of vehicle fleets through hybrid deep learning-based ensemble methods for industrial IoT datasets","authors":"Arindam Chaudhuri, Soumya K Ghosh","doi":"10.1093/jigpal/jzae017","DOIUrl":"https://doi.org/10.1093/jigpal/jzae017","url":null,"abstract":"Connected vehicle fleets have formed significant component of industrial internet of things scenarios as part of Industry 4.0 worldwide. The number of vehicles in these fleets has grown at a steady pace. The vehicles monitoring with machine learning algorithms has significantly improved maintenance activities. Predictive maintenance potential has increased where machines are controlled through networked smart devices. Here, benefits are accrued considering uptimes optimization. This has resulted in reduction of associated time and labor costs. It has also provided significant increase in cost benefit ratios. Considering vehicle fault trends in this research predictive maintenance problem is addressed through hybrid deep learning-based ensemble method (HDLEM). The ensemble framework which acts as predictive analytics engine comprises of three deep learning algorithms viz modified cox proportional hazard deep learning (MCoxPHDL), modified deep learning embedded semi supervised learning (MDLeSSL) and merged LSTM (MLSTM) networks. Both sensor as well as historical maintenance data are collected and prepared using benchmarking methods for HDLEM training and testing. Here, times between failures (TBF) modeling and prediction on multi-source data are successfully achieved. The results obtained are compared with stated deep learning models. This ensemble framework offers great potential towards achieving more profitable, efficient and sustainable vehicle fleet management solutions. This helps better telematics data implementation which ensures preventative management towards desired solution. The ensemble method's superiority is highlighted through several experimental results.","PeriodicalId":51114,"journal":{"name":"Logic Journal of the IGPL","volume":"10 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140311576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rubén Ferrero-Guillén, José-Manuel Alija-Pérez, Alberto Martínez-Gutiérrez, Rubén Álvarez, Paula Verde, Javier Díez-González
Localization Wireless Sensor Networks (WSN) represent a research topic of increasing interest due to their numerous applications. However, the viability of these systems is compromised by the localization uncertainties attained once implemented, since the network performance is highly dependent on the sensors' locations. The Node Location Problem (NLP) aims to obtain the optimal distribution of sensors for a particular environment, a problem already categorized as NP-Hard. Furthermore, localization WSN usually perform a sensor selection to determine which nodes are to be used to maximize the achieved accuracy. This problem, defined as the Sensor Selection Problem (SSP), has also been categorized as NP-Hard. While different metaheuristics have been proposed for attaining near-optimal solutions to both problems, no approach has considered the two problems simultaneously, resulting in suboptimal solutions since the SSP is biased by the actual node distribution once deployed. In this paper, a combined approach to both problems is proposed, considering the SSP within the NLP. Furthermore, a novel metaheuristic combining the Black Widow Optimization (BWO) algorithm and the Variable Neighbourhood Descent Chains (VND-Chains) local search, denominated BWO-VND-Chains, is devised, to the best of the authors' knowledge for the first time for the NLP, resulting in a more efficient and robust optimization technique. Finally, different metaheuristic algorithms are compared over an actual urban scenario, considering different sensor selection criteria, in order to determine the best methodology and selection technique. Results show that the newly devised algorithm with SSP criteria optimization achieves mean localization uncertainties up to 19.66% lower than traditional methodologies.
{"title":"Black widow optimization for reducing the target uncertainties in localization wireless sensor networks","authors":"Rubén Ferrero-Guillén, José-Manuel Alija-Pérez, Alberto Martínez-Gutiérrez, Rubén Álvarez, Paula Verde, Javier Díez-González","doi":"10.1093/jigpal/jzae032","DOIUrl":"https://doi.org/10.1093/jigpal/jzae032","url":null,"abstract":"Localization Wireless Sensor Networks (WSN) represent a research topic with increasing interest due to their numerous applications. However, the viability of these systems is compromised by the attained localization uncertainties once implemented, since the network performance is highly dependent on the sensors location. The Node Location Problem (NLP) aims to obtain the optimal distribution of sensors for a particular environment, a problem already categorized as NP-Hard. Furthermore, localization WSN usually perform a sensor selection for determining which nodes are to be utilized for maximizing the achieved accuracy. This problem, defined as the Sensor Selection Problem (SSP), has also been categorized as NP-Hard. While different metaheuristics have been proposed for attaining a near optimal solution in both problems, no approach has considered the two problems simultaneously, thus resulting in suboptimal solutions since the SSP is biased by the actual node distribution once deployed. In this paper, a combined approach of both problems simultaneously is proposed, thus considering the SSP within the NLP. Furthermore, a novel metaheuristic combining the Black Widow Optimization (BWO) algorithm and the Variable Neighbourhood Descent Chains (VND-Chains) local search, denominated as BWO-VND-Chains, is particularly devised for the first time in the author’s best knowledge for the NLP, resulting in a more efficient and robust optimization technique. Finally, a comparison of different metaheuristic algorithms is proposed over an actual urban scenario, considering different sensor selection criteria in order to attain the best methodology and selection technique. Results show that the newly devised algorithm with the SSP criteria optimization achieves mean localization uncertainties up to 19.66 % lower than traditional methodologies.","PeriodicalId":51114,"journal":{"name":"Logic Journal of the IGPL","volume":"106 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140311577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jon Díaz, Haizea Rodriguez, Jenny Fajardo-Calderín, Ignacio Angulo, Enrique Onieva
For companies involved in the supply chain, proper warehouse management is crucial. Warehouse layout arrangement and operation play a critical role in a company's ability to maintain and improve its competitiveness, and reducing costs and increasing efficiency are two of the most important warehousing goals. Deciding on the best warehouse layout is a remarkable optimization problem. This paper uses an optimization method to set bin allocations within an automated warehouse with particular characteristics. The warehouse's initial layout and the automated platforms constrain the search and define the time required to move goods within the warehouse. With the help of historical data and these movement times, a mathematical model of warehouse operation was created. An optimization procedure based on the well-known Variable Neighbourhood Search algorithm is defined and applied to the problem. Experimental results demonstrate improvements in the efficiency of warehousing operations.
{"title":"A variable neighbourhood search for minimization of operation times through warehouse layout optimization","authors":"Jon Díaz, Haizea Rodriguez, Jenny Fajardo-Calderín, Ignacio Angulo, Enrique Onieva","doi":"10.1093/jigpal/jzae018","DOIUrl":"https://doi.org/10.1093/jigpal/jzae018","url":null,"abstract":"For companies involved in the supply chain, proper warehousing management is crucial. Warehouse layout arrangement and operation play a critical role in a company’s ability to maintain and improve its competitiveness. Reducing costs and increasing efficiency are two of the most crucial warehousing goals. Deciding on the best warehouse layout is a remarkable optimization problem. This paper uses an optimization method to set bin allocations within an automated warehouse with particular characteristics. The warehouse’s initial layout and the automated platforms limit the search and define the time required to move goods within the warehouse. With the help of historical data and the definition of the time needed to move goods, a mathematical model of warehouse operation was created. An optimization procedure based on the well-known Variable Neighbourhood Search algorithm is defined and applied to the problem. Experimental results demonstrate increments in the efficiency of warehousing operations.","PeriodicalId":51114,"journal":{"name":"Logic Journal of the IGPL","volume":"60 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140311658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper investigates the Generalized Traveling Salesman Problem (GTSP), an extension of the well-known Traveling Salesman Problem (TSP) that searches for an optimal tour in a clustered graph such that every cluster is visited exactly once. We describe a novel Memetic Algorithm (MA) for solving the GTSP efficiently. Our proposed MA has at its core a genetic algorithm (GA), complemented by a Chromosome Enhancement Procedure (CEP) based on a TSP solver and the Shortest Path (SP) algorithm; to improve the convergence characteristics of the GA, a Local Search (LS) operation is applied to the best chromosomes in each generation. We tested our algorithm on a set of well-known instances from the literature, and the achieved results prove that our novel memetic algorithm is highly competitive against existing solution approaches from the specialized literature.
{"title":"A novel memetic algorithm for solving the generalized traveling salesman problem","authors":"Ovidiu Cosma, Petrică C Pop, Laura Cosma","doi":"10.1093/jigpal/jzae019","DOIUrl":"https://doi.org/10.1093/jigpal/jzae019","url":null,"abstract":"This paper investigates the Generalized Traveling Salesman Problem (GTSP), which is an extension of the well-known Traveling Salesman Problem (TSP), and it searches for an optimal tour in a clustered graph, such that every cluster is visited exactly once. In this paper, we describe a novel Memetic Algorithm (MA) for solving efficiently the GTSP. Our proposed MA has at its core a genetic algorithm (GA), completed by a Chromosome Enhancement Procedure (CEP), which is based on a TSP solver and the Shortest Path (SP) algorithm and for improving the convergence characteristics of the GA, a Local Search (LS) operation is applied for the best chromosomes in each generation. We tested our algorithm on a set of well-known instances from the literature and the achieved results prove that our novel memetic algorithm is highly competitive against the existing solution approaches from the specialized literature.","PeriodicalId":51114,"journal":{"name":"Logic Journal of the IGPL","volume":"72 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140316680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new method for evaluating aircraft engine monitoring data is proposed. Commonly, prognostics and health management systems use knowledge of the degradation processes of certain engine components, together with professional expert opinion, to predict the Remaining Useful Life (RUL). New data-driven approaches have emerged to provide accurate diagnostics without relying on such costly processes. However, most of them lack an explanatory component for understanding model learning and/or the nature of the data. To close this gap, a solution based on a novel recurrent version of a Variational Autoencoder (VAE) is proposed in this paper. The latent space learned by the model, trained with data from sensors placed in different parts of these engines, is exploited to build a self-explanatory map on which the rate of deterioration of the engines can be visually evaluated. Besides, a simple regressor model is built on top of the learned features of the encoder in order to numerically predict the RUL. As a result, remarkable prognostic accuracy is achieved, outperforming most novel and state-of-the-art approaches on NASA's modular aero-propulsion system simulation data (the C-MAPSS dataset). In addition, a practical real-world application to turbofan engine data is included. This study shows that the proposed prognostic and explainable framework is a promising new approach.
{"title":"Recurrent variational autoencoder approach for remaining useful life estimation","authors":"Nahuel Costa, Luciano Sánchez","doi":"10.1093/jigpal/jzae023","DOIUrl":"https://doi.org/10.1093/jigpal/jzae023","url":null,"abstract":"A new method for evaluating aircraft engine monitoring data is proposed. Commonly, prognostics and health management systems use knowledge of the degradation processes of certain engine components together with professional expert opinion to predict the Remaining Useful Life (RUL). New data-driven approaches have emerged to provide accurate diagnostics without relying on such costly processes. However, most of them lack an explanatory component to understand model learning and/or the nature of the data. A solution based on a novel recurrent version of a VAE is proposed in this paper to overcome this gap. The latent space learned by the model, trained with data from sensors placed in different parts of these engines, is exploited to build a self-explanatory map that can visually evaluate the rate of deterioration of the engines. Besides, a simple regressor model is built on top of the learned features of the encoder in order to numerically predict the RUL. As a result, remarkable prognostic accuracy is achieved, outperforming most of the novel and state-of-the-art approaches on the available modular aero-propulsion system simulation data (C-MAPSS dataset) from NASA. In addition, a practical real-world application is included for Turbofan engine data. This study shows that the proposed prognostic and explainable framework presents a promising new approach.","PeriodicalId":51114,"journal":{"name":"Logic Journal of the IGPL","volume":"7 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140316943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Juan Pedro Llerena, Jesús García, José Manuel Molina
Ship-type identification in a maritime context can be critical for the authorities to control the activities being carried out. Although the Automatic Identification System (AIS) has been made mandatory for certain vessels, a vessel that lacks it, whether deliberately or not, can cause a whole set of problems, which is why tracking alternatives such as radar are fully complementary to vessel monitoring systems. However, radars provide positions, but not what they are detecting. Systems capable of adding categorical information to radar detections of vessels make it possible to increase control of the activities being carried out, improve maritime traffic safety and optimize the authorities' on-site inspection resources. This paper addresses the binary classification problem (fishing ships versus all other vessels) using unbalanced data from real vessel trajectories. It takes a deep learning approach comparing two of the main trends, Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks. A weighted cross-entropy methodology is proposed and compared with classical data balancing strategies. Both networks show high performance when applying weighted cross-entropy, compared with classical machine learning approaches and classical balancing techniques. This work constitutes a novel approach to the international problem of identifying fishing ships without context.
{"title":"LSTM vs CNN in real ship trajectory classification","authors":"Juan Pedro Llerena, Jesús García, José Manuel Molina","doi":"10.1093/jigpal/jzae027","DOIUrl":"https://doi.org/10.1093/jigpal/jzae027","url":null,"abstract":"Ship-type identification in a maritime context can be critical to the authorities to control the activities being carried out. Although Automatic Identification Systems has been mandatory for certain vessels, if a vessel does not have them voluntarily or not, it can lead to a whole set of problems, which is why the use of tracking alternatives such as radar is fully complementary for a vessel monitoring systems. However, radars provide positions, but not what they are detecting. Having systems capable of adding categorical information to radar detections of vessels makes it possible to increase control of the activities being carried out, improve safety in maritime traffic, and optimize on-site inspection resources on the part of the authorities. This paper addresses the binary classification problem (fishing ships versus all other vessels) using unbalanced data from real vessel trajectories. It is performed from a deep learning approach comparing two of the main trends, Convolutional Neural Networks and Long Short-Term Memory. In this paper, it is proposed the weighted cross-entropy methodology and compared with classical data balancing strategies. Both networks show high performance when applying weighted cross-entropy compared with the classical machine learning approaches and classical balancing techniques. This work is shown to be a novel approach to the international problem of identifying fishing ships without context.","PeriodicalId":51114,"journal":{"name":"Logic Journal of the IGPL","volume":"102 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140316876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anna Burduk, Grzegorz Bocewicz, Łukasz Łampika, Dagmara Łapczyńska, Kamil Musiał
The paper discusses the problem of assigning production resources when executing a production order, using the example of a car rim manufacturing process. The more resources are involved in the manufacturing process, and the more they can be used interchangeably, the more complex and problematic scheduling becomes. Special attention is paid to effective scheduling and the assignment of rim machining operations to production stations in the considered manufacturing process. In this case, traditional scheduling methods based on simple calculations, or on the know-how of process engineers, often turn out to be insufficient to achieve the intended results. Due to the scale of the problems faced in practice, methods based on approximate approaches (Genetic Algorithms and Tabu Search) were used to solve them. In this perspective, the problem under consideration extends the classic assignment problem by taking into account operation times, potential changeovers and the capacity of production resources.
{"title":"Tabu search and genetic algorithm in rims production process assignment","authors":"Anna Burduk, Grzegorz Bocewicz, Łukasz Łampika, Dagmara Łapczyńska, Kamil Musiał","doi":"10.1093/jigpal/jzae031","DOIUrl":"https://doi.org/10.1093/jigpal/jzae031","url":null,"abstract":"The paper discusses the problem of assignment production resources in executing a production order on the example of the car rims manufacturing process. The more resources are involved in implementing the manufacturing process and the more they can be used interchangeably, the more complex and problematic the scheduling process becomes. Special attention is paid to the effective scheduling and assignment of rim machining operations to production stations in the considered manufacturing process. In this case, the use of traditional scheduling methods based on simple calculations, or the know-how of process engineers often turns out to be insufficient to achieve the intended results. Due to the scale of the problems faced in practice, the methods based on approximate approaches (Genetic and Tabu Search) were used to solve them. In this perspective, the problem under consideration involves the extension of the classic assignment problem with the possibility of taking into account: the times of operations, potential changeovers and the capacity of production resources.","PeriodicalId":51114,"journal":{"name":"Logic Journal of the IGPL","volume":"51 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140314676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
José Gaviria de la Puerta, Iker Pastor-López, Alberto Tellaeche, Borja Sanz, Hugo Sanjurjo-González, Alfredo Cuzzocrea, Pablo G Bringas
Content-based authorship identification is an emerging research problem in online social media networks, due to a wide collection of issues ranging from security to privacy preservation and from radicalization to defamation detection. Indeed, this research has attracted a relevant amount of attention from the research community in recent years. The general problem becomes harder when we consider the additional constraint of identifying the same false profile across different social media networks. Inspired by this emerging research challenge, in this paper we propose and experimentally assess an innovative framework for supporting content-based authorship identification and analysis in social media networks.
{"title":"An innovative framework for supporting content-based authorship identification and analysis in social media networks","authors":"José Gaviria de la Puerta, Iker Pastor-López, Alberto Tellaeche, Borja Sanz, Hugo Sanjurjo-González, Alfredo Cuzzocrea, Pablo G Bringas","doi":"10.1093/jigpal/jzae020","DOIUrl":"https://doi.org/10.1093/jigpal/jzae020","url":null,"abstract":"Content-based authorship identification is an emerging research problem in online social media networks, due to a wide collection of issues ranging from security to privacy preservation, from radicalization to defamation detection, and so forth. Indeed, this research has attracted a relevant amount of attention from the research community during the past years. The general problem becomes harder when we consider the additional constraint of identifying the same false profile over different social media networks, under obvious considerations. Inspired by this emerging research challenge, in this paper we propose and experimentally assess an innovative framework for supporting content-based authorship identification and analysis in social media networks.","PeriodicalId":51114,"journal":{"name":"Logic Journal of the IGPL","volume":"138 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140314624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Guided Vehicles (AGV) are an essential element of transport in Industry 4.0. Although they may seem simple systems in terms of their kinematics, their dynamics are very complex and require robust, efficient controllers to follow their routes in the workspace. In this paper, we present the design and implementation of an intelligent controller for a hybrid AGV based on fuzzy logic. In addition, genetic algorithms have been used to optimize the speed control strategy, aiming to improve efficiency and save energy. The control architecture includes a fuzzy controller for trajectory tracking that has been enhanced with genetic algorithms; the cost function first maximizes the time in the circuit and then minimizes the guiding error. It has been validated on the mathematical model of a commercial hybrid AGV that merges tricycle and differential robot components. This model considers not only the kinematic and dynamic equations of the vehicle but also the impact of friction. The performance of the intelligent control strategy is compared with an optimized PID controller. Four paths were simulated to test the validity of the approach.
{"title":"AGV fuzzy control optimized by genetic algorithms","authors":"J Enrique Sierra-Garcia, Matilde Santos","doi":"10.1093/jigpal/jzae033","DOIUrl":"https://doi.org/10.1093/jigpal/jzae033","url":null,"abstract":"Automated Guided Vehicles (AGV) are an essential element of transport in industry 4.0. Although they may seem simple systems in terms of their kinematics, their dynamics is very complex, and it requires robust and efficient controllers for their routes in the workspaces. In this paper, we present the design and implementation of an intelligent controller of a hybrid AGV based on fuzzy logic. In addition, genetic algorithms have been used to optimize the speed control strategy, aiming at improving efficiency and saving energy. The control architecture includes a fuzzy controller for trajectory tracking that has been enhanced with genetic algorithms. The cost function first maximizes the time in the circuit and then minimizes the guiding error. It has been validated on the mathematical model of a commercial hybrid AGV that merges tricycle and differential robot components. This model not only considers the kinematics and dynamics equations of the vehicle but also the impact of friction. The performance of the intelligent control strategy is compared with an optimized PID controller. Four paths were simulated to test the approach validity.","PeriodicalId":51114,"journal":{"name":"Logic Journal of the IGPL","volume":"106 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140314677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Belén Vega-Márquez, Javier Solís-García, Isabel A Nepomuceno-Chamorro, Cristina Rubio-Escudero
Electricity is an indicator of the progress of a civilization; it is a product that has greatly changed the way we think about the world. Electricity price forecasting became a fundamental task in all countries after the deregulation of electricity markets in the 1990s. This work examines the effectiveness of using multiple variables for price prediction, given the large number of factors that can influence the price in the electricity market. The tests were carried out over four periods using data from Spain and deep learning models. Two different attribute selection methods based on Pearson's correlation coefficient were used to improve the efficiency of the training process. The variables used as input to the different prediction models were chosen from those most commonly used in the literature. This study tests whether using time series lags improves predictions compared with not using lags. The results show that lags improve the results compared with a previous work in which no lags were used.
{"title":"A comparison of time series lags and non-lags in Spanish electricity price forecasting using data science models","authors":"Belén Vega-Márquez, Javier Solís-García, Isabel A Nepomuceno-Chamorro, Cristina Rubio-Escudero","doi":"10.1093/jigpal/jzae034","DOIUrl":"https://doi.org/10.1093/jigpal/jzae034","url":null,"abstract":"Electricity is an indicator that shows the progress of a civilization; it is a product that has greatly changed the way we think about the world. Electricity price forecasting became a fundamental task in all countries due to the deregulation of the electricity market in the 1990s. This work examines the effectiveness of using multiple variables for price prediction given the large number of factors that could influence the price of the electricity market. The tests were carried out over four periods using data from Spain and deep learning models. Two different attribute selection methods based on Pearson’s correlation coefficient have been used to improve the efficiency of the training process. The variables used as input to the different prediction models were chosen, considering those most commonly used previously in the literature. This study attempts to test whether using time series lags improves the non-use of lags. The results obtained have shown that lags improve the results compared to a previous work in which no lags were used.","PeriodicalId":51114,"journal":{"name":"Logic Journal of the IGPL","volume":"70 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140314514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}