This paper proposes an innovative cellular automata (CA) model based on the Harris Hawks Optimization (HHO) algorithm. HHO is an intelligent optimization algorithm inspired by the cooperative hunting behavior of Harris's hawks and demonstrates excellent optimization efficiency in spatial searches. Combining the HHO algorithm with the CA model, we establish the HHO-CA model for simulating urban growth in Guangzhou, China. The simulation achieves a total accuracy of 91.95%, an accuracy of urban cells of 82.43%, and a Kappa coefficient of 0.7441, all superior to the Null model. Furthermore, when compared with other representative CA models, the HHO-CA model outperforms them in total accuracy, accuracy of urban cells, and Kappa coefficient, demonstrating the advantage of using the HHO algorithm to mine transition rules when simulating urban growth processes.
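To illustrate the optimizer behind the model, here is a deliberately simplified HHO sketch: hawks explore randomly while the prey's "escaping energy" is high and besiege the best-known position as it decays. This follows the general HHO scheme on a toy sphere function, not the exact variant or transition-rule encoding calibrated in the paper.

```python
import numpy as np

def hho_minimize(f, dim, n_hawks=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    # Simplified Harris Hawks Optimization sketch (illustrative only).
    rng = np.random.default_rng(seed)
    hawks = rng.uniform(lb, ub, (n_hawks, dim))
    best = min(hawks, key=f).copy()
    best_val = f(best)
    for t in range(iters):
        for i in range(n_hawks):
            # Escaping energy shrinks over iterations, shifting the swarm
            # from exploration to exploitation.
            E = 2 * (2 * rng.random() - 1) * (1 - t / iters)
            if abs(E) >= 1:
                # Exploration: perch relative to a randomly chosen hawk.
                j = rng.integers(n_hawks)
                hawks[i] = hawks[j] - rng.random() * np.abs(
                    hawks[j] - 2 * rng.random() * hawks[i])
            else:
                # Exploitation: besiege the best-known position.
                hawks[i] = best - E * np.abs(best - hawks[i])
            hawks[i] = np.clip(hawks[i], lb, ub)
            val = f(hawks[i])
            if val < best_val:
                best, best_val = hawks[i].copy(), val
    return best, best_val

best, val = hho_minimize(lambda x: float(np.sum(np.asarray(x) ** 2)), dim=3)
```

In the paper's setting, the fitness function would score how well a candidate set of transition-rule parameters reproduces observed urban growth, rather than a sphere function.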
{"title":"A Harris Hawks optimization-based cellular automata model for urban growth simulation","authors":"Yuan Ding, Hengyi Zheng, Fuming Jin, Dongming Chen, Xinyu Huang","doi":"10.1007/s12145-024-01399-z","DOIUrl":"https://doi.org/10.1007/s12145-024-01399-z","url":null,"abstract":"<p>This paper proposes an innovative cellular automata model based on the Harris Hawk Optimization (HHO) algorithm. HHO is an intelligent optimization algorithm inspired by the cooperative hunting behavior of Harris’s hawks, demonstrating excellent optimization efficiency in spatial searches. Combining the HHO algorithm with the CA model, we establish the HHO-CA model for simulating urban growth in Guangzhou, China. The simulation achieves a total accuracy of 91.95%, an accuracy of urban cells of 82.43%, and a Kappa coefficient of 0.7441, all superior to the Null model. Furthermore, comparing the HHO-CA model with other representative CA models, the HHO-CA model outperforms in total accuracy, accuracy of urban cells, and Kappa coefficient, showcasing significant advantages in using the HHO algorithm to mine transition rules during the simulation of urban growth processes.</p>","PeriodicalId":49318,"journal":{"name":"Earth Science Informatics","volume":"77 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-05; DOI: 10.1007/s12145-024-01400-9
Gang Yang, Min Zeng, Xiaohong Lin, Songbai Li, Haoxiang Yang, Lingyan Shen
Earthquake early warning data from hydropower stations differ across geographical locations in both their time series and their types, and the packet loss rate during data sharing is high. To address this, a real-time sharing algorithm for earthquake early warning data of hydropower stations based on deep learning is proposed. The compressed sensing method is used to collect the seismic data of the hydropower station, and a dictionary learning algorithm based on ordered parallel atomic updating is introduced to improve the compressed sensing process and to sparsify the seismic data. Combining FCOS and DNN, the seismic velocity spectrum is picked from the collected seismic data and used as the input of a convolutional neural network. Real-time sharing of earthquake early warning data is realized using the CDMA1x network and the TCP data transmission protocol. Experiments show that the algorithm accurately picks the regional seismic velocity spectrum of hydropower stations, that the packet loss rate of earthquake early warning data transmission is low, and that the shared results contain a variety of information, providing diverse data to those who need it and demonstrating strong practicability.
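The sparse-recovery stage of compressed sensing can be sketched with Orthogonal Matching Pursuit, a standard greedy solver. This is a generic stand-in for the paper's ordered-parallel-atomic-updating dictionary learning method, with a synthetic sparse signal in place of real seismic data:

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily pick the dictionary atom most
    # correlated with the residual, then re-fit on the selected support.
    residual = y.copy()
    support = []
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 256, 4                      # measurements, atoms, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)   # random sensing dictionary
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.choice([-2.0, -1.0, 1.0, 2.0], k)
y = A @ x_true                            # compressed measurements
x_hat = omp(A, y, k)
```

With far fewer measurements than atoms (64 vs. 256), the k-sparse signal is still recovered, which is the property the sharing algorithm exploits to reduce transmitted data volume.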
{"title":"Real-time sharing algorithm of earthquake early warning data of hydropower station based on deep learning","authors":"Gang Yang, Min Zeng, Xiaohong Lin, Songbai Li, Haoxiang Yang, Lingyan Shen","doi":"10.1007/s12145-024-01400-9","DOIUrl":"https://doi.org/10.1007/s12145-024-01400-9","url":null,"abstract":"<p>Different geographical locations have different time series and types of earthquake early warning data of hydropower stations, and the packet loss rate in data sharing is high. In this regard, a real-time sharing algorithm of earthquake early warning data of hydropower stations based on deep learning is proposed. The compressed sensing method is used to collect the seismic data of the hydropower station, and the dictionary learning algorithm based on ordered parallel atomic updating is introduced to improve the compressed sensing process and to sparse the seismic data of the hydropower station. Combining FCOS and DNN, the seismic velocity spectrum is picked up from the collected seismic data and used as the input of the convolutional neural network. The real-time sharing of earthquake early warning data is realized using the CDMA1x network and TCP data transmission protocol. 
Experiments show that the algorithm can accurately pick up the regional seismic velocity spectrum of hydropower stations, the packet loss rate of earthquake early warning data transmission is low, and the sharing results contain a variety of information, which can provide a variety of data for people who need information and has strong practicability.</p>","PeriodicalId":49318,"journal":{"name":"Earth Science Informatics","volume":"8 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-05; DOI: 10.1007/s12145-024-01396-2
Chinmayee Chaini, Vijay Kumar Jha
The lunar surface, which has been extensively explored and studied, offers valuable insights into its geological history and crater distribution because of the abundance of impact craters preserved on it. Detecting the numerous craters of different sizes on the lunar surface necessitates an automated process, since manual identification consumes significant time and effort. Traditional methods rely on hand-crafted feature extraction and suffer from low performance, particularly when confronted with diverse crater sizes and illumination conditions. In recent years, automated crater detection algorithms (CDAs) based on deep learning (DL) techniques have played a vital role in detecting craters of various sizes that may be missed or misclassified by visual interpretation. This study outlines the challenges faced by traditional methods and explores recent advancements in DL techniques. The main objective is to provide a comprehensive review of prior studies, highlighting the advantages and limitations of each DL-based technique for automatic crater detection. Additionally, this study aggregates existing research on the image-processing tasks involved (semantic segmentation, classification, and object detection) that apply DL-based techniques to detect craters of various sizes on the lunar surface. Further, this study provides a comprehensive analysis of both manually and automatically compiled crater databases to assist new researchers in validating their models both qualitatively and quantitatively. By reviewing existing literature, this study aids new researchers in understanding the limitations and key findings of recent research, thereby promoting progress toward greater automation in crater detection.
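Object-detection-style CDAs are typically validated by matching predicted craters to a reference catalogue via intersection-over-union (IoU); a detection counts as correct when IoU exceeds a threshold. A minimal version for axis-aligned boxes (illustrative; crater work often uses circles instead):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

From IoU matches at a fixed threshold (commonly 0.5), the precision and recall figures quoted in the reviewed studies follow directly.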
{"title":"A review on deep learning-based automated lunar crater detection","authors":"Chinmayee Chaini, Vijay Kumar Jha","doi":"10.1007/s12145-024-01396-2","DOIUrl":"https://doi.org/10.1007/s12145-024-01396-2","url":null,"abstract":"<p>The lunar surface, which has been extensively explored and studied, offers valuable insights into its geological history and crater distribution due to the abundance of impact craters on its surface. Detecting numerous craters of different sizes on the lunar surface necessitated an automated process to avoid manual intervention, which consumed significant time and effort. However, traditional methods rely on manual feature extraction methods, encountering similar challenges, including low performance, particularly when confronted with diverse crater sizes and illumination conditions. In recent years, intelligent algorithms that introduce automated crater detection algorithms (CDAs) using deep learning (DL) techniques have played a vital role in detecting various sizes of craters on the lunar surface that may be missed or miss-classification by visual interpretation. This study outlines the challenges faced by traditional methods and explores recent advancements in DL techniques. The main objective is to provide a comprehensive review of prior studies, highlighting the advantages and limitations of each DL-based technique for automatic crater detection. Additionally, this study aggregates existing research on various image-processing tasks (such as semantic segmentation, classification-based, and object detection) utilizing DL-based techniques for detecting various sizes of craters on the lunar surface. Further, this study provides a comprehensive analysis of both manually and automatically compiled crater databases to assist new researchers in validating their models both qualitatively and quantitatively. 
By reviewing existing literature, this study aids new researchers in understanding the limitations and key findings of recent research, thereby promoting progress toward greater automation in crater detection.</p>","PeriodicalId":49318,"journal":{"name":"Earth Science Informatics","volume":"42 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-05; DOI: 10.1007/s12145-024-01402-7
Bhargav Parulekar, Nischal Singh, Anandakumar M. Ramiya
With the present trend toward digitization in many areas of urban planning and development, accurate object classification is becoming increasingly vital. To develop machine learning models that can effectively classify a broader region, it is crucial to have accurately labelled datasets for object extraction. However, generating sufficient labelled data for machine learning models remains challenging. A recently developed AI-assisted segmentation approach, the Segment Anything Model (SAM), offers a way to enhance the labelling of complex and intricate image structures. By utilizing SAM, the accuracy and consistency of annotation results can be improved while significantly reducing the time required for annotation. This paper assesses the efficiency of SAM-annotated labels for training machine learning models on high-resolution remote sensing data captured by UAVs (Unmanned Aerial Vehicles) in the peri-urban region of Anad, Kerala, India. A comparative analysis was conducted to evaluate the performance of training datasets generated using SAM against manual labelling with existing tools. Multiple machine learning models, including Random Forest, Support Vector Machine, and XGBoost, were employed for this analysis. The findings demonstrate that the XGBoost algorithm combined with SAM-annotated labels yielded an accuracy of 78%, whereas the same algorithm trained with the manually labelled dataset achieved an accuracy of only 68%. A similar pattern was observed with the Random Forest algorithm, with accuracies of 78% and 60% using SAM-annotated labels and manual labels, respectively. These outcomes showcase the enhanced effectiveness and dependability of the SAM-based segmentation method in producing accurate results.
{"title":"Evaluation of segment anything model (SAM) for automated labelling in machine learning classification of UAV geospatial data","authors":"Bhargav Parulekar, Nischal Singh, Anandakumar M. Ramiya","doi":"10.1007/s12145-024-01402-7","DOIUrl":"https://doi.org/10.1007/s12145-024-01402-7","url":null,"abstract":"<p>With the present trend toward digitization in many areas of urban planning and development, accurate object classification is becoming increasingly vital. To develop machine learning models that can effectively classify the broader region, it is crucial to have accurately labelled datasets for object extraction. However, the process of generating sufficient labelled data for machine learning models remains challenging. A recently developed AI-assisted segmentation approach called the Segment Anything Model (SAM) offers a solution to enhance the labelling of complex and intricate image structures. By utilizing SAM, the accuracy and consistency of annotation results can be improved, while also significantly reducing the time required for annotation. This paper aims to assess the efficiency of SAM annotated labels for training machine learning models using high-resolution remote sensing data captured by UAVs (Unmanned Aerial Vehicles) in the peri-urban region of Anad, Kerala, India. A comparative analysis was conducted to evaluate the performance of training datasets generated using SAM and manual labelling with existing tools. Multiple machine learning models, including Random Forest, Support Vector Machine, and XGBoost, were employed for this analysis. The findings demonstrate that employing the XGBoost algorithm in combination with SAM annotated labels yielded an accuracy of 78%. In contrast, the same algorithm trained with the manually labeled dataset achieved an accuracy of only 68%. 
A similar pattern was observed when employing the Random Forest algorithm, with accuracies of 78% and 60% while using SAM annotated labels and manual labels, respectively. These outcomes unequivocally showcase the enhanced effectiveness and dependability of the SAM-based segmentation method in producing accurate results.</p>","PeriodicalId":49318,"journal":{"name":"Earth Science Informatics","volume":"42 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04; DOI: 10.1007/s12145-024-01392-6
Na Liu, Yan Sun, Jiabao Wang, Zhe Wang, Ahmad Rastegarnia, Jafar Qajar
The elastic modulus is one of the important parameters for analyzing the stability of engineering projects, especially dam sites. In the current study, the effects of physical properties, quartz, fragment, and feldspar percentages, and the dynamic Young's modulus (DYM) on the static Young's modulus (SYM) of various types of sandstones were assessed. These investigations were conducted through simple and multivariate regression, support vector regression (SVR), an adaptive neuro-fuzzy inference system, and a backpropagation multilayer perceptron. The XRD and thin-section results showed that the studied samples were classified as arenite, litharenite, and feldspathic litharenite. The low resistance of the arenite type is mainly due to the presence of sulfate cement, clay minerals, high porosity, and carbonate fragments. Examining the fracture patterns of these sandstones in different resistance ranges showed that at low resistance values the fracture pattern is mainly of simple shear type, which changes to multiple extension types with increasing compressive strength. Among the influencing factors, the percentage of quartz has the greatest effect on SYM. A comparison of the methods' performance based on CPM and error values in estimating SYM revealed that SVR (R² = 0.98, RMSE = 0.11 GPa, CPM = +1.84) outperformed the other methods in terms of accuracy. The average difference between the SYM predicted by the intelligent methods and the measured SYM value was less than 0.05%, which indicates the efficiency of the methods used in estimating SYM.
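The multivariate-regression baseline in this abstract can be sketched with ordinary least squares: predict SYM from a few predictors and score the fit with R². The data below are synthetic illustrations (with an assumed linear relation), not the paper's measurements:

```python
import numpy as np

# Synthetic sandstone data: SYM assumed to depend linearly on the dynamic
# modulus (DYM) and quartz percentage, plus noise. Purely illustrative.
rng = np.random.default_rng(0)
dym = rng.uniform(10, 60, 40)        # dynamic Young's modulus, GPa
quartz = rng.uniform(20, 80, 40)     # quartz content, %
sym = 0.8 * dym + 0.05 * quartz + rng.normal(0, 0.5, 40)

# Multivariate least-squares fit with an intercept column.
X = np.column_stack([dym, quartz, np.ones_like(dym)])
coef, *_ = np.linalg.lstsq(X, sym, rcond=None)
pred = X @ coef

# Coefficient of determination, the R² reported for each model above.
r2 = 1 - np.sum((sym - pred) ** 2) / np.sum((sym - np.mean(sym)) ** 2)
```

The SVR and neural models in the paper replace this linear map with nonlinear ones, but the evaluation against measured SYM via R² and RMSE is the same.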
{"title":"Estimation of static Young’s modulus of sandstone types: effective machine learning and statistical models","authors":"Na Liu, Yan Sun, Jiabao Wang, Zhe Wang, Ahmad Rastegarnia, Jafar Qajar","doi":"10.1007/s12145-024-01392-6","DOIUrl":"https://doi.org/10.1007/s12145-024-01392-6","url":null,"abstract":"<p>The elastic modulus is one of the important parameters for analyzing the stability of engineering projects, especially dam sites. In the current study, the effect of physical properties, quartz, fragment, and feldspar percentages, and dynamic Young’s modulus (DYM) on the static Young’s modulus (SYM) of the various types of sandstones was assessed. These investigations were conducted through simple and multivariate regression, support vector regression, adaptive neuro-fuzzy inference system, and backpropagation multilayer perceptron. The XRD and thin section results showed that the studied samples were classified as arenite, litharenite, and feldspathic litharenite. The low resistance of the arenite type is mainly due to the presence of sulfate cement, clay minerals, high porosity, and carbonate fragments in this type. Examining the fracture patterns of these sandstones in different resistance ranges showed that at low values of resistance, the fracture pattern is mainly of simple shear type, which changes to multiple extension types with increasing compressive strength. Among the influencing factors, the percentage of quartz has the greatest effect on SYM. A comparison of the methods' performance based on CPM and error values in estimating SYM revealed that SVR (R<sup>2</sup> = 0.98, RMSE = 0.11GPa, CPM = + 1.84) outperformed other methods in terms of accuracy. 
The average difference between predicted SYM using intelligent methods and measured SYM value was less than 0.05% which indicates the efficiency of the used methods in estimating SYM.</p>","PeriodicalId":49318,"journal":{"name":"Earth Science Informatics","volume":"136 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-03; DOI: 10.1007/s12145-024-01393-5
Antonella S. Antonini, Leandro Luque, Gabriela R. Ferracutti, Ernesto A. Bjerg, Silvia M. Castro, María Luján Ganuza
Spinel group minerals, found within various rock types, exhibit distinct categorizations based on their host rocks. According to Barnes and Roeder (2001), these minerals can be classified into eight primary groups, each further subdivided into variable numbers of subgroups that can be related to a particular tectonic setting. This classification is based on the cations corresponding to the end-members of the spinel prism and is traditionally analyzed in this prismatic space or using projections of it. In this prismatic representation, several categories tend to overlap, making it impossible to determine the tectonic environment in such cases. An alternative is to generate representations of these groups that consider more attributes, making the most of the many values measured during geochemical analysis. In this paper, we present SpinelVA, a visual exploration tool that integrates machine learning techniques and allows the identification of groups using the cations considered by Barnes and Roeder together with additional ones obtained from chemical analysis. SpinelVA reveals the tectonic environment of unknown samples by categorizing them according to the Barnes and Roeder classification. Additionally, SpinelVA integrates a collection of visual analysis techniques alongside the customary spinel prism projections and provides a set of interactions that assist geologists in the exploration process. Users can perform a complete data analysis by combining the proposed techniques and associated interactions.
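The prism coordinates that drive this classification are cation ratios. A small helper (our reading of the commonly used Barnes-and-Roeder-style axes; verify the exact ratio definitions against the paper before reuse):

```python
def spinel_prism_coords(cr, al, fe3, mg, fe2):
    # Cation ratios commonly used as spinel prism axes, with inputs in
    # atoms per formula unit: Cr# and Fe2+# span the prism base, and the
    # trivalent-iron ratio gives the prism height.
    cr_number = cr / (cr + al)            # Cr# = Cr / (Cr + Al)
    fe2_number = fe2 / (fe2 + mg)         # Fe2+# = Fe2+ / (Fe2+ + Mg)
    fe3_ratio = fe3 / (cr + al + fe3)     # Fe3+ / (Cr + Al + Fe3+)
    return cr_number, fe2_number, fe3_ratio
```

SpinelVA's point of departure, per the abstract, is that these three ratios alone leave groups overlapping, so it feeds additional chemical attributes into the machine learning step.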
{"title":"SpinelVA. A new perspective for the visual analysis and classification of spinel group minerals","authors":"Antonella S. Antonini, Leandro Luque, Gabriela R. Ferracutti, Ernesto A. Bjerg, Silvia M. Castro, María Luján Ganuza","doi":"10.1007/s12145-024-01393-5","DOIUrl":"https://doi.org/10.1007/s12145-024-01393-5","url":null,"abstract":"<p>Spinel group minerals, found within various rock types, exhibit distinct categorizations based on their host rocks. According to Barnes and Roeder (2001), these minerals can be classified into eight primary groups, each further subdivided into variable numbers of subgroups that can be related to a particular tectonic setting. This classification is based on the cations corresponding to the end-members of the spinel prism and is traditionally analyzed in this prismatic space or using projections of it. In this prismatic representation, several categories tend to overlap, making it impossible to determine which is the tectonic environment in that scenario. An alternative to solve this problem is to generate representations of these groups considering more attributes, making the most of the many values measured during the geochemical analysis. In this paper, we present <i>SpinelVA</i>, a visual exploration tool that integrates Machine Learning techniques and allows the identification of groups using the cations considered by Barnes and Roeder and some additional ones obtained from chemical analysis. <i>SpinelVA</i> allows us to know the tectonic environment of unknown samples by categorizing them according to the Barnes and Roeder classification. Additionally, <i>SpinelVA</i> integrates a collection of visual analysis techniques alongside the already used spinel prism projections and provides a set of interactions that assist geologists in the exploration process. 
Users can perform a complete data analysis by combining the proposed techniques and associated interactions.</p>","PeriodicalId":49318,"journal":{"name":"Earth Science Informatics","volume":"82 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141519636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-03; DOI: 10.1007/s12145-024-01397-1
Abhilash Gogineni, Madhusudana Rao Chintalacheruvu, Ravindra Vitthal Kale
Modelling streamflow in snow-covered mountainous regions with complex hydrology and topography poses a significant challenge, particularly given the pronounced influence of the temperature lapse rate (TLAPS) and precipitation lapse rate (PLAPS). The present study area covers 54,990 km² in the western Himalayas, including the Tibetan Plateau and the Indian portion of the USRB up to Bhakra Dam in Himachal Pradesh. To estimate the snowmelt and rainfall runoff contributions to the catchment, an integrated Soil and Water Assessment Tool (SWAT) model incorporates a temperature-index method with an elevation-band approach. The uncertainty analysis of the SWAT model was conducted using the Sequential Uncertainty Fitting algorithm (SUFI-2). Furthermore, machine learning models, namely Long Short-Term Memory (LSTM) neural networks and Random Forest (RF), are integrated with the SWAT model to enhance the accuracy of streamflow predictions resulting from snowmelt. The performance indices of the model for the monthly calibration period are R² = 0.83, NSE = 0.82, P-BIAS = 2.3, P-factor = 0.82, and R-factor = 0.81; the corresponding values for the validation period are R² = 0.78, NSE = 0.77, P-BIAS = 5.7, P-factor = 0.72, and R-factor = 0.66. The results show that 63.08% of the Bhakra gauging station's annual streamflow is attributed to snow and glacier melt. The highest snow and glacier melt occurs from May to August, while the minimum is observed from November to February. Regarding snowmelt forecasting, the LSTM model outperforms the RF model, with R² values of 0.86 and 0.85 during training and testing, respectively. Additionally, sensitivity analysis highlights that soil and groundwater flow parameters, specifically SOL_K, SOL_AWC, and GWQMN, are the most sensitive parameters for streamflow modelling. The study confirms the effectiveness of SWAT for water resource planning and management in the mountainous USRB.
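The temperature-index melt scheme mentioned above has a simple generic form: melt is proportional to the positive excess of air temperature over a base temperature, applied per elevation band. A sketch with illustrative parameter values (the DDF and base temperature here are assumptions, not the study's calibrated values):

```python
def degree_day_melt(temps_c, ddf=4.0, t_base=0.0):
    # Temperature-index (degree-day) snowmelt in mm/day:
    #   M = DDF * max(T - T_base, 0)
    # where DDF is the degree-day factor (mm per degree C per day).
    return [ddf * max(t - t_base, 0.0) for t in temps_c]

# Daily mean temperatures for one elevation band (deg C), illustrative.
melt = degree_day_melt([-5.0, 0.0, 2.0, 10.0])
```

In the elevation-band approach, daily temperatures are first lapsed from the station elevation to each band (via TLAPS) before this relation is applied, so higher bands melt later in the season.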
{"title":"Modelling of snow and glacier melt dynamics in a mountainous river basin using integrated SWAT and machine learning approaches","authors":"Abhilash Gogineni, Madhusudana Rao Chintalacheruvu, Ravindra Vitthal Kale","doi":"10.1007/s12145-024-01397-1","DOIUrl":"https://doi.org/10.1007/s12145-024-01397-1","url":null,"abstract":"<p>Modelling streamflow in snow-covered mountainous regions with complex hydrology and topography poses a significant challenge, particularly given the pronounced influence of temperature lapse rate (TLAPS) and precipitation lapse rate (PLAPS). The Present study area covers 54,990 km2 in the western Himalayas, including the Tibetan Plateau and the Indian portion of the USRB up to Bhakra Dam in Himachal Pradesh. In order to estimate the snowmelt and rainfall runoff contributions to the catchment, an integrated Soil and Water Assessment Tool (SWAT) model incorporates a Temperature Index with an Elevation Band approach. The uncertainty analysis of the SWAT model has been conducted using the Sequential Uncertainty Fitting algorithm (SUFI-2). Furthermore, machine-learning models such as Long Short-Term Memory (LSTM) neural networks and Random Forest (RF) are integrated with the SWAT model to enhance the accuracy of streamflow predictions resulting from snowmelt. The performance indices of a model for the monthly calibration period are R2 = 0.83, NSE = 0.82, P-BIAS = 2.3, P-factor = 0.82, and R-factor = 0.81. The corresponding values for the validation period are R^2 = 0.78, NSE = 0.77, P-BIAS = 5.7, P-factor = 0.72 and R-factor = 0.66. The results show that 63.08% of the Bhakra gauging station’s annual streamflow has attributed to snow and glacier melt. The highest snow and glacier melt occur from May to August, while the minimum is observed from November to February. Regarding snowmelt forecasting, the LSTM model outperforms the RF model with an R<sup>2</sup> value of 0.86 and 0.85 during training and testing, respectively. 
Additionally, sensitivity analysis highlights that soil and groundwater flow parameters, specifically SOL_K, SOL_AWC, and GWQMN, are the most sensitive parameters for streamflow modelling. The study confirms the effectiveness of SWAT for water resource planning and management in the mountainous USRB.</p>","PeriodicalId":49318,"journal":{"name":"Earth Science Informatics","volume":"124 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141519635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To improve data usage in an interdisciplinary context, a clear understanding of the variables being measured is required for both humans and machines. In this paper, the I-ADOPT framework, which decomposes variable names into atomic elements, was tested within the context of continental surfaces and critical zone science, characterized by a large number and variety of observed environmental variables. We showed that the I-ADOPT framework can describe environmental variables with precision and that it is flexible enough to be used in the critical zone science context. Variable names can be documented in detail while allowing alignment with other ontologies or thesauri. We identified difficulties in modeling complex variables, such as those monitoring fluxes between different environmental compartments or ratios of physical quantities. We also showed that, for some variables, different decompositions were possible, which could make alignment with other ontologies and thesauri more difficult. The precision of variable names proved inadequate for data discovery services, and a non-standard label (SimplifiedLabel) had to be defined for this purpose. In the context of open science and interdisciplinary research, the I-ADOPT framework has the potential to improve the interoperability of information systems and the use of data from various sources and disciplines.
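To make the decomposition idea concrete, here is one hedged example of an I-ADOPT-style record. The component names (Property, ObjectOfInterest, Matrix, ContextObject, Constraint) come from the framework; the variable itself and its values are our own illustration, not taken from the paper:

```python
# An I-ADOPT-style decomposition of a single environmental variable into
# atomic elements, alongside the coarse SimplifiedLabel the paper had to
# add for data discovery. Example values are illustrative.
variable = {
    "SimplifiedLabel": "soil water temperature",
    "Property": "temperature",            # what is measured
    "ObjectOfInterest": "water",          # the entity bearing the property
    "Matrix": "soil",                     # the medium containing the object
    "ContextObject": None,                # no additional context here
    "Constraint": "at 10 cm depth",       # restriction on the observation
}
```

The alignment benefit follows from this structure: each atomic element, rather than the whole composite name, can be mapped to a term in an external ontology or thesaurus.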
{"title":"Implementing a new Research Data Alliance recommendation, the I-ADOPT framework, for the naming of environmental variables of continental surfaces","authors":"Coussot Charly, Braud Isabelle, Chaffard Véronique, Boudevillain Brice, Sylvie Galle","doi":"10.1007/s12145-024-01373-9","DOIUrl":"https://doi.org/10.1007/s12145-024-01373-9","url":null,"abstract":"<p>To improve data usage in an interdisciplinary context, a clear understanding of the variables being measured is required for both humans and machines. In this paper, the I-ADOPT framework, which decomposes variable names into atomic elements, was tested within the context of continental surfaces and critical zone science, characterized by a large number and variety of observed environmental variables. We showed that the I-ADOPT framework can be used effectively to describe environmental variables with precision and that it was flexible enough to be used in the critical zone science context. Variable names can be documented in detail while allowing alignment with other ontologies or thesauri. We have identified difficulties in modeling complex variables, such as those monitoring fluxes between different environmental compartments and for variables monitoring ratios of physical quantities. We also showed that, for some variables, different decompositions were possible, which could make alignments with other ontologies and thesauri more difficult. The precision of variable names proved inadequate for data discovery services and a non-standard label (<i>SimplifiedLabel</i>) had to be defined for this purpose. In the context of open science and interdisciplinary research, the I-ADOPT framework has the potential to improve the interoperability of information systems and the use of data from various sources and disciplines.</p>","PeriodicalId":49318,"journal":{"name":"Earth Science Informatics","volume":"36 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141531237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
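The atomic decomposition that I-ADOPT prescribes can be illustrated with a minimal sketch. The field names below paraphrase the framework's published components (Property, ObjectOfInterest, Matrix, context objects, constraints); the example variable and the `simplified_label` helper, which mirrors the paper's *SimplifiedLabel* idea, are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class IAdoptVariable:
    """Minimal sketch of an I-ADOPT variable decomposition (illustrative field names)."""
    property: str                       # the characteristic being observed, e.g. "temperature"
    object_of_interest: str             # the entity whose property is observed
    matrix: Optional[str] = None        # medium the object of interest is contained in
    context_objects: List[str] = field(default_factory=list)  # entities giving context
    constraints: List[str] = field(default_factory=list)      # restrictions on the observation

    def simplified_label(self) -> str:
        # coarse, human-friendly label for data discovery,
        # analogous to the paper's SimplifiedLabel
        return f"{self.object_of_interest} {self.property}"

# "temperature of water in soil" decomposed into atomic elements
soil_water_temp = IAdoptVariable(property="temperature",
                                 object_of_interest="water",
                                 matrix="soil")
print(soil_water_temp.simplified_label())  # prints: water temperature
```

The precise decomposition and the coarse discovery label coexist here, which is the trade-off the abstract describes: detailed names for interoperability, a simplified label for search.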
Pub Date : 2024-07-02, DOI: 10.1007/s12145-024-01394-4
Minal Bodke, Sangita Chaudhari
Rapid advances in satellite communication over the last decade have resulted in the widespread use of remote sensing images. As satellite image transmission over the Internet has increased, secrecy concerns have arisen as well, so digitally transmitted images must offer both high imperceptibility and confidentiality. Multispectral images consist of multiple bands, and it is challenging to select the spectral band for watermarking so that the structural and visual quality of the satellite image is retained. This work proposes a blind watermarking model based on a hybrid optimization strategy comprising two processes: embedding and extraction. A novel hybrid optimization, the FBIAO algorithm, which amalgamates Archimedes Optimization (ArchOA) and Forensic Based Investigation Optimization (FBIO), is used to select the spectral band for watermarking. FBIAO improves the balance between exploration and exploitation, boosts solution diversity, and accelerates the convergence of FBI-based optimization for spectral band selection. A 3-level Discrete Wavelet Transform (DWT) is used to embed the watermark logo in the selected spectral band image, and position selection is then applied to identify the embedding location. Further, the watermark image is scrambled using the Arnold map technique to remove correlation between image pixels. The proposed method achieves a peak signal-to-noise ratio (PSNR) between 35.57 dB and 36.80 dB and a structural similarity index (SSIM) between 0.91 and 0.93 without attack on six sample datasets. Under different attacks it remains robust, with SSIM between 0.6 and 0.87 and normalized correlation (NC) between 0.8 and 0.91, superior to traditional techniques.
{"title":"Hyperspectral remote sensing image watermarking using discrete wavelet transform and forensic based investigation archimedes optimization","authors":"Minal Bodke, Sangita Chaudhari","doi":"10.1007/s12145-024-01394-4","DOIUrl":"https://doi.org/10.1007/s12145-024-01394-4","url":null,"abstract":"<p>Rapid advancement in satellite communication over the last decade have resulted in the widespread use of remote sensing images. Additionally, as satellite image transmission over the Internet has increased, secrecy concerns have also arisen. As a result, digitally transmitted images must have great imperceptibility and confidentiality. Multispectral images consist of multiple bands. It is very challenging to select the important spectral band for watermarking so that the structural and visual quality of the satellite Image can be retained. This work proposes an innovative blind watermarking model based on a hybrid optimization strategy performed with the following two processes: the embedding process and the extraction process. A novel hybrid optimization named FBIAO algorithm, which is the amalgamation of Archimedes Optimization (ArchOA) and Forensic Based Investigation Optimization (FBIO) algorithm is used to select spectral band for watermarking. The proposed novel FBIAO enhances the balances between the exploration and exploitation, boosts the solution diversity and improves the convergence of FBI based optimization for spectral band selection. The 3-level Discrete Wavelet Transform (DWT) is used to embed the watermark logo in the selected spectral band image and then position selection is applied to identify the location for embedding the watermark. Further, the watermark image is scrambled using Arnold Map technique to avoid the correlation between image pixel. The proposed method provides a peak signal-to-noise ratio (PSNR) in the range of 35.57 dB to 36.80 dB and, a structural similarity index (SSIM) between 0.91 to 0.93 without attack for six sample datasets. It provides robustness for different attacks and offers SSIM in between 0.6 to 0.87 and normalized Correlation (NC) in between 0.8 to 0.91 which is superior over traditional techniques.</p>","PeriodicalId":49318,"journal":{"name":"Earth Science Informatics","volume":"2012 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141519639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
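The Arnold map scrambling step described in the abstract can be sketched in pure Python. This is an illustrative toy on a small square grid, not the authors' implementation; it assumes the classical Arnold cat map (x, y) → ((x + y) mod N, (x + 2y) mod N), whose matrix has determinant 1 and is therefore invertible, so the watermark can be descrambled exactly at extraction time.

```python
def arnold_scramble(img, rounds=1):
    """Scramble a square N x N image with the Arnold cat map:
    pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N)."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img

def arnold_unscramble(img, rounds=1):
    """Invert the map with the inverse matrix [[2, -1], [-1, 1]] (determinant 1)."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(2 * x - y) % n][(-x + y) % n] = img[x][y]
        img = out
    return img

# toy 4 x 4 "watermark": scrambling permutes pixels (decorrelating neighbors),
# and the same number of inverse rounds restores the original exactly
band = [[4 * r + c for c in range(4)] for r in range(4)]
assert arnold_unscramble(arnold_scramble(band, rounds=3), rounds=3) == band
```

Because each round is a bijection on the pixel grid, no information is lost; the number of rounds can serve as part of the secret key.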
Pub Date : 2024-07-01, DOI: 10.1007/s12145-024-01390-8
Tran Tuan Thach
This paper presents deep learning using Long Short-Term Memory (LSTM) networks, machine learning employing Random Forest (RF) and Gradient Boosting (GB) algorithms, and the rating curve (RC) for estimating daily streamflow at the outlet of river basins. The Kone River basin in Vietnam is selected as an example to demonstrate the ability of these approaches. Hydro-meteorological data, including rainfall at Vinh Kim as well as water level and streamflow at Binh Tuong, were collected over the long period from 1/1/1979 to 31/12/2018. The approaches mentioned above are implemented and applied to estimate daily streamflow at Binh Tuong in the Kone River basin. Firstly, the coefficients and hyper-parameters of each approach are carefully determined using the hydro-meteorological data from 1/1/1979 to 31/12/2009 together with dimensional and dimensionless error indexes. The results reveal that deep learning using LSTM reproduces the observed streamflow best, with correlation coefficient r and NSE close to unity, while RMSE and MAE are less than 1.5% of the observed magnitude of streamflow. The RC and the machine learning approaches employing RF and GB algorithms reproduce the observed streamflow acceptably, with r and NSE varying between 0.77 and 0.98, and RMSE and MAE ranging from 0.4 to 6.0% of the observed magnitude of streamflow. Secondly, the approaches are also applied to estimate daily streamflow from 1/1/2010 to 31/12/2018, revealing consistent statistical characteristics of streamflow in the river basin. Finally, the impacts of the input data on the output streamflow are discussed.
{"title":"Multiple data-driven approaches for estimating daily streamflow in the Kone River basin, Vietnam","authors":"Tran Tuan Thach","doi":"10.1007/s12145-024-01390-8","DOIUrl":"https://doi.org/10.1007/s12145-024-01390-8","url":null,"abstract":"<p>This paper presents deep learning using LSTM, machine learning employing RF and GB algorithms, and the rating curve (RC) that can be used for estimating daily streamflow at the outlet of river basins. The Kone River basin in Vietnam is selected as an example for demonstrating the ability of these approaches. Hydro-meteorological data, including rainfall at Vinh Kim as well as water level and streamflow at Binh Tuong, were collected in the long period from 1/1/1979 to 31/12/2018. Multiple approaches mentioned above are implemented and applied for estimating daily streamflow at Binh Tuong in the Kone River basin. Firstly, coefficients and hyper-parameters in each approach are carefully determined using available hydro-meteorological data from 1/1/1979 to 31/12/2009 and dimensional and dimensionless error indexes. The results revealed that deep learning using LSTM presents the most suitable performance of the observed streamflow, with correlation coefficient <i>r</i> and <i>NSE</i> being close unity, while <i>RMSE</i> and <i>MAE</i> are less than 1.5% of the observed magnitude of streamflow. The RC and machine learning employing RF and GB algorithms procedures acceptably the observed streamflow, with <i>r</i> and <i>NSE</i> varying between 0.77 and 0.98, and <i>RMSE</i> and <i>MAE</i> ranging from 0.4 to 6.0% of the observed magnitude of streamflow. Secondly, multiple approaches are also applied for estimating daily streamflow from 1/1/2010 to 31/12/2018, revealing consistent statistical characteristics of streamflow in the river basin. Finally, the impacts of input data on output streamflow are discussed.</p>","PeriodicalId":49318,"journal":{"name":"Earth Science Informatics","volume":"22 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141519638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
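The dimensional (RMSE, MAE) and dimensionless (r, NSE) error indexes used to rank the approaches can be sketched in plain Python. The sample values below are hypothetical, not data from the Kone River basin; the formulas are the standard definitions.

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations (1 is perfect)."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - sse / var

def rmse(obs, sim):
    """Root mean square error, in the units of the streamflow."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    """Mean absolute error, in the units of the streamflow."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def pearson_r(obs, sim):
    """Pearson correlation coefficient between observed and simulated series."""
    mo, ms = sum(obs) / len(obs), sum(sim) / len(sim)
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (so * ss)

obs = [10.0, 12.0, 9.0, 15.0, 11.0]   # hypothetical observed daily streamflow (m3/s)
sim = [9.5, 12.5, 9.2, 14.0, 11.3]    # hypothetical simulated values
print(round(nse(obs, sim), 3), round(rmse(obs, sim), 3))  # prints: 0.923 0.571
```

NSE and r are dimensionless and compare models across basins, while RMSE and MAE stay in streamflow units, which is why the abstract reports them as percentages of the observed magnitude.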