Dragan Simić, Z. Bankovic, José R. Villar, J. Calvo-Rolle, V. Ilin, S. Simic, Svetlana Simić
Optimization, in general, is regarded as the process of finding optimal values for the variables of a given problem in order to minimize or maximize one or more objective functions. The brain storm optimization (BSO) algorithm solves complex optimization problems by mimicking the human idea-generation process, in which a group of people solves a problem together. The aim of this paper is to present hybrid BSO algorithm solutions from the past five years. The study is divided into two parts: strategies and applications. The first part presents the different strategies used in hybrid BSO algorithms to improve various abilities of the original BSO algorithm. The second part presents real-world applications from the past five years in optimization, prediction and feature selection.
Past five years on strategies and applications in hybrid brain storm optimization algorithms: a review. Logic Journal of the IGPL. DOI: 10.1093/jigpal/jzae051. Published 2024-06-03.
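The review above does not fix a particular BSO variant; as a rough orientation, a minimal (non-hybrid) BSO loop can be sketched as follows. All parameter values, and the fitness-rank grouping used as a stand-in for the usual k-means clustering step, are illustrative assumptions, not the surveyed algorithms.

```python
import random

def sphere(x):
    """Toy objective to minimize: sum of squares, optimum at the origin."""
    return sum(v * v for v in x)

def bso(obj, dim=2, n_ideas=20, n_clusters=4, iters=100, seed=1):
    """Minimal brain storm optimization sketch (hypothetical parameters)."""
    rng = random.Random(seed)
    ideas = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_ideas)]
    best = min(ideas, key=obj)
    for t in range(iters):
        # stand-in for the clustering step: group ideas by fitness rank
        ideas.sort(key=obj)
        size = n_ideas // n_clusters
        clusters = [ideas[i * size:(i + 1) * size] for i in range(n_clusters)]
        sigma = 0.5 * (iters - t) / iters  # shrinking perturbation step
        new_ideas = []
        for _ in range(n_ideas):
            cluster = rng.choice(clusters)
            # generate a new idea from the cluster centre or a random member
            base = cluster[0] if rng.random() < 0.5 else rng.choice(cluster)
            cand = [v + rng.gauss(0, sigma) for v in base]
            new_ideas.append(min(base, cand, key=obj))  # keep the better one
        ideas = new_ideas
        best = min(best, min(ideas, key=obj), key=obj)
    return best

best = bso(sphere)
```

Hybrid variants in the literature replace parts of this loop (clustering, perturbation, selection) with operators borrowed from other metaheuristics.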
In this paper, we introduce the notion of d-elements on precoherent preidempotent quantales (PIQ), construct the Zariski topology on $Max(Q_{d})$ and explore its properties. Firstly, we give a sufficient condition for the topological space $Max(Q_{d})$ to be Hausdorff. Secondly, we prove that if $P=\mathfrak{B}(P)$ and $Q=\mathfrak{B}(Q)$, then $P$ is isomorphic to $Q$ iff $Max(P_{d})$ is homeomorphic to $Max(Q_{d})$. Moreover, we prove that $(P\otimes Q)_{d}$ is isomorphic to $P_{d}\otimes Q_{d}$ iff $P_{d}\otimes Q_{d}=(P_{d}\otimes Q_{d})_{d}$. Finally, we prove that the category $\textbf{dPFrm}$ is a reflective subcategory of $\textbf{PIQuant}$.
The d-elements of precoherent preidempotent quantales and their applications. Xianglong Ruan. Logic Journal of the IGPL. DOI: 10.1093/jigpal/jzae063. Published 2024-06-03.
M. Á. González de la Torre, L. H. Encinas, J. I. S. García
Code-based cryptography is currently the second most promising post-quantum mathematical tool for quantum-resistant algorithms. In 2022, Kyber (a lattice-based algorithm) was selected as the first post-quantum standard Key Encapsulation Mechanism, and since then the National Institute of Standards and Technology (NIST) post-quantum standardization call has focused on code-based cryptosystems. Three of the four candidates that remain in the fourth round are code-based algorithms; in fact, the only non-code-based algorithm (SIKE) is now considered vulnerable. Given this landscape, it is crucial to update previous results about these algorithms and their functioning. The Fujisaki-Okamoto transformation is a key part of the study of post-quantum algorithms, and in this work we focus our analysis on the Classic McEliece, BIKE and HQC proposals and on how they apply this transformation to obtain IND-CCA semantic security. Since performance is the most important parameter in the evaluation of the algorithms after security, we have compared the performance of the code-based algorithms of the NIST call using the same architecture for all of them.
Structural analysis of code-based algorithms of the NIST post-quantum call. Logic Journal of the IGPL. DOI: 10.1093/jigpal/jzae071. Published 2024-06-03.
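As a rough illustration of the re-encryption structure that the Fujisaki-Okamoto transformation adds to a KEM (the actual Classic McEliece, BIKE and HQC constructions differ in many details), here is a toy sketch built on an intentionally insecure XOR-based PKE. Every name and constant is hypothetical; only the shape — derandomized coins, re-encryption check, implicit rejection — follows the transform.

```python
import hashlib
import os

def H(*parts):
    """SHA-256 over concatenated byte strings (used as all random oracles)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy deterministic PKE (NOT secure; sk == pk, 32-byte messages):
# Enc(pk, m; r) = (r XOR H(pk)) || (m XOR H(r)); Dec inverts both halves.
def pke_enc(pk, m, r):
    return xor(r, H(b"mask", pk)) + xor(m, H(b"pad", r))

def pke_dec(sk, ct):
    r = xor(ct[:32], H(b"mask", sk))
    return xor(ct[32:], H(b"pad", r))

def encaps(pk):
    m = os.urandom(32)
    r = H(b"G", m)                 # coins derived from m, as in the FO transform
    ct = pke_enc(pk, m, r)
    return ct, H(b"key", m, ct)    # shared key

def decaps(sk, pk, ct):
    m = pke_dec(sk, ct)
    if pke_enc(pk, m, H(b"G", m)) == ct:   # re-encryption check
        return H(b"key", m, ct)
    return H(b"reject", sk, ct)            # implicit rejection

pk = sk = os.urandom(32)
ct, key = encaps(pk)
```

A tampered ciphertext decrypts to a different message, so the recomputed coins no longer match and decapsulation falls through to the implicit-rejection branch, returning an unrelated key.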
Machine stability and energy efficiency have become major issues in the manufacturing industry, primarily during the COVID-19 pandemic, when fluctuations in supply and demand were common. As a result, Predictive Maintenance (PdM) has become more desirable, since predicting failures ahead of time makes it possible to avoid downtime and improves machine stability and energy efficiency. One type of machine failure stands out due to its impact: machine overstrain, which can occur when machines are used beyond their tolerable limits. In the current literature, there are few, if any, relevant works that focus on machine overstrain failure detection or prediction. Accordingly, the purpose of this paper is to implement and compare four Machine Learning (ML) algorithms for PdM applied to machine overstrain failures: Artificial Neural Network (ANN), Gradient Boosting, Random Forest and Support Vector Machine (SVM). Moreover, it proposes a training methodology for imbalanced data and the automatic optimization of hyperparameters, which aims to improve the performance of the ML models. To evaluate the performance of the ML models, a synthetic dataset that simulates industrial machine data is used. The obtained results show the robustness of the proposed methodology, with the ANN and SVM models achieving a perfect recall score, with 98.95% and 98.85% accuracy, respectively.
Machine overstrain prediction for early detection and effective maintenance: A machine learning algorithm comparison. Bruno Mota, Pedro Faria, Carlos Ramos. Logic Journal of the IGPL. DOI: 10.1093/jigpal/jzae055. Published 2024-05-27.
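The abstract does not spell out the paper's training methodology for imbalanced data; one common baseline it may resemble is random oversampling of the minority (failure) class until classes balance, sketched below with hypothetical names.

```python
import random

def oversample(X, y, seed=0):
    """Randomly duplicate minority-class rows until every class matches the majority size."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    Xb, yb = [], []
    for label, rows in by_class.items():
        # pad each class with resampled copies of its own rows
        rows = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        Xb.extend(rows)
        yb.extend([label] * target)
    return Xb, yb
```

With 90 healthy and 10 failure samples, the returned set holds 90 of each, so recall on the rare failure class is no longer drowned out during training.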
Let $\varSigma$ be a signature without $0$-ary operation symbols and $\textsf{Sl}$ the category of semilattices. Then, after defining and investigating the categories $\int^{\textsf{Sl}}\textrm{Isys}_{\varSigma}$, of inductive systems of $\varSigma$-algebras over all semilattices, which are ordered pairs $\mathscr{A}=(\textbf{I},\mathscr{A})$ where $\textbf{I}$ is a semilattice and $\mathscr{A}$ an inductive system of $\varSigma$-algebras relative to $\textbf{I}$, and PłAlg$(\varSigma)$, of Płonka $\varSigma$-algebras, which are ordered pairs $(\textbf{A},D)$ where $\textbf{A}$ is a $\varSigma$-algebra and $D$ a Płonka operator for $\textbf{A}$, i.e. in the traditional terminology, a partition function for $\textbf{A}$, we prove the main result of the paper: there exists an adjunction, which we call the Płonka adjunction, from $\int^{\textsf{Sl}}\textrm{Isys}_{\varSigma}$ to PłAlg$(\varSigma)$.
Płonka adjunction. J Climent Vidal, E Cosme Llópez. Logic Journal of the IGPL. DOI: 10.1093/jigpal/jzae064. Published 2024-05-25.
The generation of the pitch control signal in a wind turbine (WT) is not straightforward due to the nonlinear dynamics of the system and the coupling of its internal variables; in addition, the system is subject to the uncertainty that comes from the random nature of the wind. Fuzzy logic has proved useful in applications with changing system parameters or where uncertainty is relevant, as here, but tuning the fuzzy logic controller (FLC) parameters is neither straightforward nor easy. On the other hand, reinforcement learning (RL) allows systems to learn automatically, and this capability can be exploited to tune the FLC. In this work, a WT pitch control architecture that uses RL to tune the membership functions and scale the output of a fuzzy controller is proposed. The RL strategy calculates the fuzzy controller gains in order to reduce the output power error of the WT according to the wind speed. Different reward mechanisms based on the output power error have been considered. Simulation results with different wind profiles show that this architecture performs better (123.7 W) in terms of power error than an FLC without RL (133.2 W) or a simpler PID (208.8 W). Moreover, it provides a smooth response and outperforms other hybrid controllers such as RL-PID and radial basis function neural network control.
Combination of fuzzy control and reinforcement learning for wind turbine pitch control. J Enrique Sierra-Garcia, Matilde Santos. Logic Journal of the IGPL. DOI: 10.1093/jigpal/jzae054. Published 2024-05-25.
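The paper's RL strategy tunes both membership functions and output scaling; as a much-reduced illustration of reward-driven gain selection only, an epsilon-greedy bandit over a discrete gain set on a toy linear plant might look like this (all names, values and the plant model are hypothetical):

```python
import random

def plant(gain, wind):
    """Toy surrogate: output power scales with controller gain and wind speed."""
    return gain * wind

def tune_gain(target=100.0, gains=(0.5, 1.0, 2.0, 4.0), episodes=500, eps=0.1, seed=0):
    """Epsilon-greedy bandit: reward is the negative absolute power error."""
    rng = random.Random(seed)
    q = {g: 0.0 for g in gains}   # running mean reward per gain
    n = {g: 0 for g in gains}     # pull counts
    for _ in range(episodes):
        wind = rng.uniform(40, 60)
        # explore with probability eps, otherwise pick the best-known gain
        g = rng.choice(gains) if rng.random() < eps else max(q, key=q.get)
        reward = -abs(target - plant(g, wind))  # smaller error -> higher reward
        n[g] += 1
        q[g] += (reward - q[g]) / n[g]          # incremental mean update
    return max(q, key=q.get)
```

For a 100 W target and wind around 50, the gain 2.0 minimizes the expected power error, so the learned preference settles there.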
Noémi Gaskó, M. Suciu, Rodica Ioana Lung, Tamás Képes
The critical node detection problem is a central task in computational graph theory due to its wide applicability; it consists of deleting $k$ nodes so as to minimize a certain graph measure. In this article, we propose a new Extremal Optimization-based approach, the Pseudo-Deterministic Noisy Extremal Optimization (PDNEO) algorithm, to solve the critical node detection variant in which pairwise connectivity is minimized. PDNEO uses an adaptive pseudo-deterministic parameter to switch between random nodes and articulation points during the search, as well as other features such as noise induction to preserve diversity, greedy search to better exploit the search space, and a broader search-space exploration mechanism. Numerical experiments on synthetic and real-world networks show the effectiveness of the proposed algorithm compared with existing methods.
A Pseudo-Deterministic Noisy Extremal Optimization algorithm for the pairwise connectivity Critical Node Detection Problem. Logic Journal of the IGPL. DOI: 10.1093/jigpal/jzae056. Published 2024-05-20.
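The pairwise connectivity objective minimized here is standard: after deleting the chosen nodes, sum |C|(|C|-1)/2 over the remaining connected components C. The sketch below shows the objective plus a simple greedy baseline; it is a reference point, not PDNEO itself.

```python
from collections import deque

def pairwise_connectivity(adj, removed=frozenset()):
    """Sum of |C|*(|C|-1)/2 over connected components, ignoring removed nodes."""
    seen, total = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        size, queue = 0, deque([start])   # BFS over one component
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        total += size * (size - 1) // 2
    return total

def greedy_cnd(adj, k):
    """Greedy baseline: repeatedly delete the node whose removal minimizes connectivity."""
    removed = set()
    for _ in range(k):
        best = min((n for n in adj if n not in removed),
                   key=lambda n: pairwise_connectivity(adj, removed | {n}))
        removed.add(best)
    return removed
```

On the path 0-1-2-3-4, the initial connectivity is 10; removing the middle node (an articulation point) splits it into two pairs and drops the objective to 2, which is why articulation points are natural candidates during the search.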
Virginia Riego del Castillo, Lidia Sánchez-González, Laura Fernández, Ruben Rebollar, E. Samperio
Accurate measurement of livestock weight is a primary indicator in the meat industry for increasing economic gain. In lambs, the weight of a live animal is still usually estimated manually using traditional scales, a process that is tedious for the experienced assessor and stressful for the animal. In this paper, we propose a solution to this problem using computer vision techniques: the proposed procedure estimates the weight of a lamb by analysing its zenithal image without interacting with the animal, which speeds up the process and reduces weighing costs. It is based on a data-driven decision support system that uses RGB-D machine vision techniques and regression models. Unlike existing methods, it does not require walk-over weighing platforms or special, expensive infrastructure. The proposed method includes a decision support system that automatically rejects images that are not appropriate for estimating the lamb's weight. After determining the body contour of the lamb, we compute several features that feed different regression models. The best results were achieved with Extra Tree Regression ($R^{2}$=91.94%), outperforming existing techniques. Using only an image, the proposed approach can identify, with minimal error, the optimal weight of a lamb to be slaughtered, so as to maximise economic profit.
A non-stressful vision-based method for weighing live lambs. Logic Journal of the IGPL. DOI: 10.1093/jigpal/jzae059. Published 2024-05-20.
Javier González-Enrique, María Inmaculada Rodríguez-García, Juan Jesús Ruiz-Aguilar, María Gema Carrasco-García, Ivan Felis Enguix, Ignacio J Turias
The objective of this research is to develop accurate forecasting models for chlorophyll-α concentrations at various depths in El Mar Menor, Spain. Chlorophyll-α plays a crucial role in assessing eutrophication in this vulnerable ecosystem. To achieve this objective, various deep learning forecasting techniques were used, including long short-term memory (LSTM), bidirectional LSTM and gated recurrent unit (GRU) networks. The models were designed to forecast chlorophyll-α levels with a 2-week prediction horizon. To enhance the models' accuracy, a sliding window method combined with a blocked cross-validation procedure for time series was also applied to these techniques. Two input strategies were tested: using only the chlorophyll-α time series, and incorporating exogenous variables. The proposed approach significantly improved the accuracy of the predictive models, regardless of the forecasting technique employed. Results were remarkable, with $\overline{\sigma}$ values reaching approximately 0.90 for the 0.5-m depth level and 0.80 for deeper levels. The proposed forecasting models and methodologies have great potential for predicting eutrophication episodes and acting as decision-making tools for environmental agencies. Accurate prediction of eutrophication episodes through these models could allow proactive measures to be implemented, resulting in improved environmental management and preservation of the ecosystem.
Chlorophyll-α forecasting using LSTM, bidirectional LSTM and GRU networks in El Mar Menor (Spain). Logic Journal of the IGPL. DOI: 10.1093/jigpal/jzae046. Published 2024-05-19.
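The sliding-window and blocked cross-validation bookkeeping mentioned in the abstract can be sketched independently of any particular network; the window length, horizon and block count below are illustrative, not the paper's settings.

```python
def sliding_windows(series, window, horizon):
    """Pair each input window with the value `horizon` steps past its end."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return X, y

def blocked_cv_splits(n_samples, n_blocks):
    """Contiguous, non-shuffled blocks: train on block i, test on block i+1,
    so the model never sees data from the future of its test period."""
    size = n_samples // n_blocks
    for i in range(n_blocks - 1):
        train = list(range(i * size, (i + 1) * size))
        test = list(range((i + 1) * size, (i + 2) * size))
        yield train, test
```

Keeping the blocks contiguous and ordered is what distinguishes blocked cross-validation from ordinary shuffled k-fold, which would leak future values into training for a time series.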
José Machado, António Chaves, Larissa Montenegro, Carlos Alves, Dalila Durães, Ricardo Machado, Paulo Novais
The significance of energy efficiency in the development of smart cities cannot be overstated. It is essential to have a clear understanding of current energy consumption (EC) patterns in both public and private buildings. One way to achieve this is by employing machine learning (ML) classification algorithms, which offer a broader perspective on the factors influencing EC. These algorithms can be applied to real data from databases, making them valuable tools for smart city applications. In this paper, our focus is specifically on the EC of public schools in a Portuguese city, as this plays a crucial role in designing a smart city. Using a comprehensive dataset on school EC, we thoroughly evaluate multiple ML algorithms. The objective is to identify the most effective algorithm for classifying average EC patterns. The outcomes of this study hold significant value for school administrators and facility managers: by leveraging the predictions generated by the selected algorithm, they can optimize energy usage and, consequently, reduce costs. The use of a comprehensive dataset ensures the reliability and accuracy of our evaluations of the various ML algorithms for EC classification.
Behaviour of Machine Learning algorithms in the classification of energy consumption in school buildings. Logic Journal of the IGPL. DOI: 10.1093/jigpal/jzae058. Published 2024-05-19.