To solve the problems of localization and identification of fish in the complex fishway environment and to improve the accuracy of fish detection, this paper proposes an object detection algorithm, YOLORG, and a fishway fish detection dataset (FFDD). The FFDD contains 4,591 images collected from the web and laboratory shots and labeled with the LabelImg tool, covering fish in a wide range of complex scenarios. The YOLORG algorithm, based on YOLOv8, replaces the traditional FPN–PAN network with a C2f multi-scale feature fusion network built on a Gather-and-Distribute mechanism, which addresses the information loss that accompanies the fusion of feature maps of different sizes. We also propose C2D, a structural re-parameterization module with rich gradient flow and good performance, to further improve the detection accuracy of the algorithm. Experimental results show that YOLORG improves mAP50 and mAP50-95 by 1.2% and 1.8%, respectively, over the original network on the joint VOC dataset, compares favorably in accuracy with other state-of-the-art object detection algorithms, and, after training on the FFDD, is able to detect fish even in very turbid environments.
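The mAP50 metric quoted above counts a predicted box as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal stdlib sketch of that overlap computation (not the authors' code; the `(x1, y1, x2, y2)` box format is an assumption):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Sweeping the IoU acceptance threshold from 0.5 to 0.95 in steps of 0.05 and averaging is what distinguishes mAP50-95 from mAP50.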
{"title":"Fish detection based on Gather-and-Distribute mechanism Multi-scale feature fusion network and Structural Re-parameterization method","authors":"Dengyong Zhang, Sheng Gao, Bin Deng, Jihan Xu, Yifei Xiang, Maohui Gan, Chaoxiong Qu","doi":"10.2166/hydro.2024.034","url":"https://doi.org/10.2166/hydro.2024.034","journal":"Journal of Hydroinformatics","publicationDate":"2024-04-23"}
Seyed Abolfazl Hashemi, A. Robati, Mehdi Momeni Raghabadi, Mohammadali Kazerooni Sadi
This study aimed to determine the optimal conjunctive utilization of groundwater and surface water resources in the Halil River basin, one of the most significant study regions in Kerman Province (Iran). The multi-verse optimizer (MVO) and an adaptive neuro-fuzzy inference system (ANFIS) simulation model, combined as the MVO–ANFIS simulation–optimization model, were used for this purpose. Moreover, the optimal exploitation policy for the studied basin was presented. The ANFIS model yielded a coefficient of determination greater than 0.99 in Baft, Rabor, and Jiroft, showing a high capability to simulate groundwater levels in these three regions. Therefore, the ANFIS model was adopted as the simulation model to predict the aquifer water table in these regions. Regarding the conjunctive utilization of groundwater and surface water resources, the exploitation policy resulting from the MVO–ANFIS simulation–optimization model performed well, supplying 91.70, 87.75, and 97.58% of the total demands of Baft, Rabor, and Jiroft, respectively. Moreover, the water system performance indicators, including reliability (82.96, 72.65, 95.07), resiliency (70, 53.47, 80), vulnerability (29.54, 25.64, 17.02), and sustainability (74.24%, 66.10%, 85.78%) in the mentioned regions, respectively, showed the appropriate performance of the proposed model for the simulation–optimization problem.
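The reliability, resiliency, vulnerability, and sustainability indicators quoted above can be computed from a supplied/demanded time series. The sketch below uses the classic Hashimoto-style definitions and a geometric-mean sustainability index; these formulas, the failure threshold, and the variable names are assumptions and may differ in detail from the paper's:

```python
def performance_indices(supplied, demanded, failure_threshold=1.0):
    """Reliability, resiliency, vulnerability and a composite sustainability
    index for a water-supply series. A step 'fails' when the supply ratio
    supplied/demanded drops below failure_threshold."""
    ratios = [s / d for s, d in zip(supplied, demanded)]
    fail = [r < failure_threshold for r in ratios]
    n = len(fail)
    # reliability: fraction of time steps without failure
    reliability = 1 - sum(fail) / n
    # resiliency: chance that a failure step is followed by a success step
    recoveries = sum(1 for i in range(n - 1) if fail[i] and not fail[i + 1])
    resiliency = recoveries / sum(fail) if any(fail) else 1.0
    # vulnerability: mean unmet fraction over the failure steps
    deficits = [1 - r for r, f in zip(ratios, fail) if f]
    vulnerability = sum(deficits) / len(deficits) if deficits else 0.0
    sustainability = (reliability * resiliency * (1 - vulnerability)) ** (1 / 3)
    return reliability, resiliency, vulnerability, sustainability
```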
{"title":"Conjunctive management of groundwater and surface water resources using a hybrid simulation–optimization method","authors":"Seyed Abolfazl Hashemi, A. Robati, Mehdi Momeni Raghabadi, Mohammadali Kazerooni Sadi","doi":"10.2166/hydro.2024.220","url":"https://doi.org/10.2166/hydro.2024.220","journal":"Journal of Hydroinformatics","publicationDate":"2024-04-20"}
Qiaoqiao Yan, Bingsong Zhang, Yi Jiang, Ying Liu, Bin Yang, Haijun Wang
Rain gauge networks provide direct precipitation measurements and have been widely used in hydrology, synoptic-scale meteorology, and climatology. However, rain gauge observations are subject to a variety of error sources, and quality control (QC) is required to ensure their reasonable use. To enhance the automatic detection of anomalies in the data, a novel multi-source data quality control (NMQC) method is proposed for hourly rain gauge data. It employs a phased strategy to reduce the risk of misjudgment caused by the uncertainty of radar and satellite remote-sensing measurements. NMQC was applied to the QC of hourly gauge data from more than 24,000 hydro-meteorological stations in the Yangtze River basin in 2020. The results show that its detection ratio of anomalous data is 1.73‰, of which only 1.73% is suspicious data needing confirmation by experts. Moreover, the distribution characteristics of the anomalous data are consistent with the climatic characteristics of the study region as well as the measurement and maintenance modes of the rain gauges. Overall, NMQC has a strong ability to label anomalous data automatically while identifying a low proportion of suspicious data. It can greatly reduce manual intervention and shorten the impact time of anomalous data in operational work.
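The phased strategy described above can be illustrated as two stages: hard plausibility checks that need no remote-sensing data, then a cross-check against the uncertain radar and satellite estimates that only marks values as suspicious for expert confirmation. All thresholds and the specific rules below are illustrative assumptions, not the NMQC settings:

```python
def phased_qc(gauge_mm, radar_mm, satellite_mm,
              hard_max=200.0, abs_tol_mm=5.0, rel_tol=3.0):
    """Return 'pass', 'suspicious' or 'anomalous' for one hourly gauge value
    (all inputs in mm/h). A None reference means that source is unavailable."""
    # Phase 1: plausibility checks independent of remote sensing.
    if gauge_mm < 0 or gauge_mm > hard_max:
        return "anomalous"

    # Phase 2: cross-check; because remote-sensing rainfall is uncertain,
    # disagreement with BOTH sources only yields 'suspicious'.
    def disagrees(ref):
        if ref is None:
            return False
        big_abs_gap = abs(gauge_mm - ref) > abs_tol_mm
        big_rel_gap = ref == 0 or gauge_mm > rel_tol * ref or gauge_mm < ref / rel_tol
        return big_abs_gap and big_rel_gap

    if disagrees(radar_mm) and disagrees(satellite_mm):
        return "suspicious"
    return "pass"
```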
{"title":"Quality control of hourly rain gauge data based on radar and satellite multi-source data","authors":"Qiaoqiao Yan, Bingsong Zhang, Yi Jiang, Ying Liu, Bin Yang, Haijun Wang","doi":"10.2166/hydro.2024.272","url":"https://doi.org/10.2166/hydro.2024.272","journal":"Journal of Hydroinformatics","publicationDate":"2024-04-18"}
{"title":"Erratum: Journal of Hydroinformatics 1 November 2022; 24 (6): 1234–1253. Fluid transient analysis by the method of characteristics using an object-oriented simulation tool, Ramón Pérez, Sebastián Dormido, doi: https://dx.doi.org/10.2166/hydro.2022.067","doi":"10.2166/hydro.2024.003","url":"https://doi.org/10.2166/hydro.2024.003","journal":"Journal of Hydroinformatics","publicationDate":"2024-04-17"}
The present study evaluates the potential of Bidirectional Long Short-Term Memory (Bi-LSTM), Convolutional Neural Networks (CNNs), eXtreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LGBM), and Random Forest (RF) for predicting daily inflows to the Sri Ram Sagar Project (SRSP), Telangana, India. Inputs to the models are rainfall, evaporation, time-lagged inflows, and climate indices. Seven input combinations (S1–S7) were formed. Fifteen and a half years of data were considered, of which 11 years were used for training. Hyperparameter tuning was performed with the Tree-structured Parzen Estimator. The performance of the algorithms was assessed using the Kling–Gupta efficiency (KGE). Results indicate that Bi-LSTM with combination S7 performed better than the others, as evident from KGE values of 0.92 and 0.87 during training and testing, respectively. Furthermore, a Stacking Ensemble Mechanism (SEM) was also employed to ascertain its efficacy over the other chosen algorithms, resulting in KGE values of 0.94 and 0.89 during training and testing. It was also able to simulate peak inflow events satisfactorily. Thus, SEM is a better alternative for reservoir inflow predictions.
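The Kling–Gupta efficiency used to score the models combines three components: Pearson correlation, a variability ratio, and a mean-bias ratio. A stdlib implementation of the standard 2009 formulation:

```python
from math import sqrt

def kge(sim, obs):
    """Kling-Gupta efficiency (2009): 1 - sqrt((r-1)^2 + (a-1)^2 + (b-1)^2),
    with r the Pearson correlation, a the ratio of standard deviations and
    b the ratio of means of simulated vs observed series."""
    n = len(sim)
    mean_s = sum(sim) / n
    mean_o = sum(obs) / n
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(sim, obs)) / n
    std_s = sqrt(sum((s - mean_s) ** 2 for s in sim) / n)
    std_o = sqrt(sum((o - mean_o) ** 2 for o in obs) / n)
    r = cov / (std_s * std_o)
    alpha = std_s / std_o        # variability ratio
    beta = mean_s / mean_o       # mean-bias ratio
    return 1 - sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

A perfect simulation gives KGE = 1; unlike RMSE, values are comparable across catchments with different flow magnitudes.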
{"title":"Daily reservoir inflow prediction using stacking ensemble of machine learning algorithms","authors":"Deepjyoti Deb, Vasan Arunachalam, K. S. Raju","doi":"10.2166/hydro.2024.210","url":"https://doi.org/10.2166/hydro.2024.210","journal":"Journal of Hydroinformatics","publicationDate":"2024-04-16"}
Marc Ribalta Gené, Ramón Béjar, Carles Mateu, Lluís Corominas, Oscar Esbrí, Edgar Rubión
Sediment accumulation in sewers is a source of cascading problems if left unattended, causing pipe failures, blockages, flooding, or odour problems. Good maintenance scheduling reduces dangerous incidents, but it also has financial and human costs. In this paper, we propose a predictive model to support the management of maintenance routines and reduce expenditure. The solution is based on an architecture composed of an autoencoder and a feedforward neural network that classifies future sediment deposition. The autoencoder serves as a feature reduction component that receives the physical properties of a sewer section and compresses them into a smaller number of variables retaining the most important information, reducing data uncertainty. The feedforward neural network then receives this compressed information together with rain and maintenance data, using all of them to classify sediment deposition against four thresholds: more than 5, 10, 15, and 20% sediment deposition. We used the architecture to train four classification models, with the best scores at the 5% threshold: 82% accuracy, 70% precision, 76% specificity, and 88% sensitivity. By combining the classifications obtained with the four models, the solution delivers a final indicator that categorizes the deposited sediment into clearly defined ranges.
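The four threshold classifiers can be merged into the final range indicator by counting how many exceedance flags are positive. The rule for resolving inconsistent flag patterns (e.g. '>10%' raised without '>5%') is an illustrative choice, not necessarily the paper's:

```python
def deposition_range(flags):
    """Combine four binary outputs (exceeds 5, 10, 15, 20% deposition) into
    one range label by counting positive flags."""
    bounds = ["<5%", "5-10%", "10-15%", "15-20%", ">20%"]
    return bounds[sum(bool(f) for f in flags)]
```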
{"title":"Sewer sediment deposition prediction using a two-stage machine learning solution","authors":"Marc Ribalta Gené, Ramón Béjar, Carles Mateu, Lluís Corominas, Oscar Esbrí, Edgar Rubión","doi":"10.2166/hydro.2024.144","url":"https://doi.org/10.2166/hydro.2024.144","journal":"Journal of Hydroinformatics","publicationDate":"2024-04-12"}
Ana Dodig, Elisa Ricci, Goran Kvaščev, Milan Stojković
Water quality prediction is crucial for effective river stream management. Dissolved oxygen, conductivity, and chemical oxygen demand are vital chemical parameters for water quality. The development of machine learning (ML) and deep learning (DL) methods has made them widely used in this domain. Sophisticated DL techniques, especially long short-term memory (LSTM) networks, are required for accurate, real-time multi-step prediction. LSTM networks are effective in predicting water quality due to their ability to handle long-term dependencies in sequential data. We propose a novel hybrid approach to water quality parameter prediction that combines DL with a data smoothing method. The Sava River at the Jamena hydrological station serves as a case study. Our workflow uses LSTM networks alongside the LOcally WEighted Scatterplot Smoothing (LOWESS) technique for data filtering. For comparison, a Support Vector Regressor (SVR) is used as the baseline method. Performance is evaluated with the Root Mean Squared Error (RMSE) and coefficient of determination (R2) metrics. Results demonstrate that LSTM outperforms the baseline method, with an R2 score of up to 0.9998 and an RMSE of 0.0230 on the test set for dissolved oxygen. Over a 5-day prediction period, our approach achieves an R2 score of 0.9912 and an RMSE of 0.1610, confirming it as a reliable method for predicting water quality parameters several days ahead.
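LOWESS smooths each point with a weighted linear fit over its nearest neighbours, downweighting distant points with a tricube kernel. A minimal single-pass stdlib sketch (no robustness iterations), not the implementation used in the paper:

```python
def lowess(x, y, frac=0.5):
    """LOcally WEighted Scatterplot Smoothing: for each point, fit a
    tricube-weighted linear regression over its k nearest neighbours,
    where k = frac * len(x)."""
    n = len(x)
    k = max(2, int(frac * n))
    smoothed = []
    for i in range(n):
        # k nearest neighbours of x[i] by distance along x
        nbrs = sorted(range(n), key=lambda j: abs(x[j] - x[i]))[:k]
        dmax = max(abs(x[j] - x[i]) for j in nbrs) or 1.0
        w = {j: (1 - (abs(x[j] - x[i]) / dmax) ** 3) ** 3 for j in nbrs}
        sw = sum(w.values())
        mx = sum(w[j] * x[j] for j in nbrs) / sw
        my = sum(w[j] * y[j] for j in nbrs) / sw
        sxx = sum(w[j] * (x[j] - mx) ** 2 for j in nbrs)
        sxy = sum(w[j] * (x[j] - mx) * (y[j] - my) for j in nbrs)
        slope = sxy / sxx if sxx else 0.0
        smoothed.append(my + slope * (x[i] - mx))
    return smoothed
```

Because the local model is linear, perfectly linear data pass through unchanged, while high-frequency noise is averaged out.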
{"title":"A novel machine learning-based framework for the water quality parameters prediction using hybrid long short-term memory and locally weighted scatterplot smoothing methods","authors":"Ana Dodig, Elisa Ricci, Goran Kvaščev, Milan Stojković","doi":"10.2166/hydro.2024.273","url":"https://doi.org/10.2166/hydro.2024.273","journal":"Journal of Hydroinformatics","publicationDate":"2024-04-12"}
Janni Mosekær Nielsen, M. R. Rasmussen, S. Thorndahl, M. Ahm, Jesper Ellerbæk Nielsen
Predicting the response to rainfall in urban hydrological applications requires accurate precipitation estimates with a high spatiotemporal resolution to reflect the natural variability of rainfall. However, installing rain gauges under nearly ideal measurement conditions is often difficult, if not impossible, in urban areas. This paper demonstrates the potential of deriving rainfall measurements in urban areas and bias-adjusting weather radar rainfall measurements using stormwater runoff measurements. As a supplement to point rainfall measurements from rain gauges, the developed bias adjustment approach uses catchment runoff-rainfall estimates derived from water level measurements of a stormwater detention pond. The study shows that the bias-adjusted radar product correlates highly with rain gauge measurements in the catchment. Moreover, the presented approach enables rainfall measurements within a catchment independent of any rain gauges located there, making the technique highly applicable for increasing the density of ground observations and thus improving weather radar precipitation estimates over urban areas. The method also derives the catchment-specific runoff coefficient without expensive flow measurements in the catchment, making it very scalable. This paper highlights the potential of using easily obtainable catchment runoff-rainfall measurements to increase the density of available ground observations and thereby improve weather radar precipitation estimates.
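The runoff-to-rainfall inversion behind such an approach can be sketched as follows: the pond's level rise gives an event inflow volume, which divided by the runoff coefficient and catchment area yields a catchment-average rainfall depth, from which a multiplicative mean-field bias factor for the radar is computed. The variable names and single-event simplifications (all pond inflow from catchment runoff, no outflow during the event) are assumptions, not the paper's exact formulation:

```python
def pond_rainfall_mm(level_rise_m, pond_area_m2, catchment_area_m2, runoff_coeff):
    """Catchment-average rainfall depth (mm) inferred from the water-level
    rise of a stormwater detention pond during one event."""
    runoff_volume_m3 = level_rise_m * pond_area_m2
    rainfall_m = runoff_volume_m3 / (runoff_coeff * catchment_area_m2)
    return rainfall_m * 1000.0

def mean_field_bias(ground_mm, radar_mm):
    """Multiplicative bias factor applied to the radar rainfall field."""
    return ground_mm / radar_mm
```

For example, a 0.2 m rise of a 5,000 m2 pond draining a 10 ha catchment with a runoff coefficient of 0.5 implies 20 mm of rainfall; if the radar saw 16 mm, the field is scaled by 1.25.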
{"title":"Can stormwater runoff measurements be used for weather radar rainfall adjustment?","authors":"Janni Mosekær Nielsen, M. R. Rasmussen, S. Thorndahl, M. Ahm, Jesper Ellerbæk Nielsen","doi":"10.2166/hydro.2024.172","url":"https://doi.org/10.2166/hydro.2024.172","journal":"Journal of Hydroinformatics","publicationDate":"2024-04-12"}
The uneven velocity distribution formed at the lateral inlet/outlet poses a significant risk of damaging the trash racks. Reasonable design of the inlet/outlet structures requires consideration of two major aspects: the average velocity (Vm) and the coefficient of unevenness (Uc). This paper develops an optimization framework that combines an interpretable Gradient Boosting Decision Tree (SOBOL-GBDT) with the Non-dominated Sorting Genetic Algorithm (NSGA-II). A total of 125 conditions were simulated with CFD to generate the dataset, and GBDT was then implemented to establish a nonlinear mapping between the input parameters, including the vertical (α) and horizontal (β) diffusion angles, diffusion segment length (LD), and channel area (CA), and the objectives Uc and Vm. The SOBOL analysis reveals that CA and α play more significant roles than β and LD in predicting Uc. In addition, GBDT is observed to capture the interactive effects of the input parameters better than other machine learning models. Subsequently, a multi-objective optimization framework using GBDT-NSGA-II is developed. The framework calculates the optimal Pareto front and determines the best solution using a pseudo-weight method. The results demonstrate that the framework leads to significant improvements in reducing flow separation in the diffusion segment and in the normalized velocity distribution. The SOBOL-GBDT-NSGA-II framework facilitates a rational and effective design of the inlet/outlet.
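The pseudo-weight method mentioned above assigns each Pareto solution a weight vector from its normalized distance to the worst value in every objective and returns the solution whose weights best match a preference vector (equal preference by default). An illustrative stdlib sketch for minimization objectives, not the authors' implementation:

```python
def pseudo_weight_pick(front, prefs=None):
    """Pick a compromise solution from a Pareto front of minimization
    objectives with the pseudo-weight method."""
    m = len(front[0])
    prefs = prefs or [1.0 / m] * m
    f_min = [min(p[i] for p in front) for i in range(m)]
    f_max = [max(p[i] for p in front) for i in range(m)]
    span = [(f_max[i] - f_min[i]) or 1.0 for i in range(m)]
    best, best_dist = None, float("inf")
    for p in front:
        # distance from the worst value in each objective, normalized
        raw = [(f_max[i] - p[i]) / span[i] for i in range(m)]
        total = sum(raw) or 1.0  # guard for degenerate fronts
        w = [r / total for r in raw]
        dist = sum((w[i] - prefs[i]) ** 2 for i in range(m))
        if dist < best_dist:
            best, best_dist = p, dist
    return best
```

With equal preferences, the method favors balanced trade-off points near the middle of the front rather than the extremes.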
{"title":"Interpretable GBDT model-based multi-objective optimization analysis for the lateral inlet/outlet design in pumped-storage power stations","authors":"G. Guo, Liu Yakun, Cao Ze, Di Zhang, Xiukui Zhao","doi":"10.2166/hydro.2024.304","url":"https://doi.org/10.2166/hydro.2024.304","journal":"Journal of Hydroinformatics","publicationDate":"2024-04-12"}
Haijia Zhang, Jiahong Liu, Chao Mei, Lirong Dong, Tianxu Song
Green computing is a current research hotspot, and adapting professional computing models to low-power ARM architecture processors is a research trend. This article constructs a hydrological–hydrodynamic model with SWMM and TELEMAC-2D as submodules, taking stormwater grates and inspection wells as water flow exchange nodes. To run the coupled model on an ARM architecture processor, container technology is used for model adaptation, and the adapted coupled model is named the ARM hydrological–hydrodynamic model (AHM). To verify the reasonableness and accuracy of the model, a numerical simulation study of waterlogging evolution under various scenarios was carried out for the inner harbor of the Macao Peninsula. The analysis showed that the maximum flow velocities were concentrated in low-lying streets in the city center and that areas of standing water were distributed in a point-like manner. Risk distribution maps for different storm recurrence periods were also constructed. Finally, the direct economic losses caused by Typhoon Mangkhut were calculated from the model results and compared with the statistical values, with an error of only 2.3%; the direct flooding losses in the inner harbor area under different rainfall scenarios were derived accordingly.
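The water flow exchanged at a grate or inspection well node in such couplings is commonly computed from the head difference between the 2D surface and the sewer with the standard orifice equation Q = Cd·A·sqrt(2·g·|Δh|). The sketch below illustrates such a coupling rule; the discharge coefficient and sign convention are assumptions, not the exact AHM scheme:

```python
from math import sqrt

def exchange_flow_m3s(surface_level_m, sewer_head_m, orifice_area_m2,
                      discharge_coeff=0.6, g=9.81):
    """Discharge exchanged at a stormwater grate / inspection well node.
    Positive flow drains the 2D surface into the sewer; negative flow is
    surcharge from the sewer onto the street."""
    dh = surface_level_m - sewer_head_m
    q = discharge_coeff * orifice_area_m2 * sqrt(2 * g * abs(dh))
    return q if dh >= 0 else -q
```

At each coupling time step, SWMM-side and TELEMAC-side solvers would apply this discharge as a sink and source, respectively, reversing sign when the sewer surcharges.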
{"title":"Construction of hydrological and hydrodynamic models based on ARM architecture processor – A case study of inner Harbor area of Macao Peninsula","authors":"Haijia Zhang, Jiahong Liu, Chao Mei, Lirong Dong, Tianxu Song","doi":"10.2166/hydro.2024.248","url":"https://doi.org/10.2166/hydro.2024.248","journal":"Journal of Hydroinformatics","publicationDate":"2024-04-12"}