DMHANT: DropMessage Hypergraph Attention Network for Information Propagation Prediction.
Qi Ouyang, Hongchang Chen, Shuxin Liu, Liming Pu, Dongdong Ge, Ke Fan
Pub Date: 2024-10-23, DOI: 10.1089/big.2023.0131
Predicting propagation cascades is crucial for understanding information propagation in social networks. Existing methods typically focus on the structure or order of infected users within a single cascade sequence, ignoring the global dependencies among cascades and users, which is insufficient to characterize their dynamic interaction preferences. Moreover, existing methods handle model robustness poorly. To address these issues, we propose a prediction model named DropMessage Hypergraph Attention Network, which constructs a hypergraph from the cascade sequences. Specifically, to capture user preferences dynamically, we divide the diffusion hypergraph into multiple subgraphs according to timestamps, develop hypergraph attention networks to explicitly learn complete interactions, and adopt a gated fusion strategy to connect them for user cascade prediction. In addition, a message-dropping method, DropMessage, is incorporated to increase the robustness of the model. Experimental results on three real-world datasets indicate that the proposed model significantly outperforms state-of-the-art information propagation prediction models on both MAP@K and Hits@K metrics, and that it retains a clearer performance advantage over existing models under data perturbation.
Maximizing Influence in Social Networks Using Combined Local Features and Deep Learning-Based Node Embedding.
Asgarali Bouyer, Hamid Ahmadi Beni, Amin Golzari Oskouei, Alireza Rouhi, Bahman Arasteh, Xiaoyang Liu
Pub Date: 2024-10-22, DOI: 10.1089/big.2023.0117
The influence maximization problem suffers from several issues, including low infection rates and high time complexity. Many proposed methods are unsuitable for large-scale networks because of their time complexity or their reliance on free parameters. To address these challenges, this article proposes a local heuristic called Embedding Technique for Influence Maximization (ETIM) that uses shell decomposition, graph embedding and reduction, and combined local structural features. The algorithm selects candidate nodes based on their connections among network shells and their topological features, reducing the search space and computational overhead. It uses a deep learning-based node embedding technique to create a multidimensional vector for each candidate node and calculates each node's spreading dependency from local topological features. Finally, influential nodes are identified using the results of the previous phases and newly defined local features. The proposed algorithm is evaluated under the independent cascade model, demonstrating its competitiveness and its ability to achieve the best performance in terms of solution quality. Compared with the collective influence global algorithm, ETIM is significantly faster and improves the infection rate by an average of 12%.
Attribute-Based Adaptive Homomorphic Encryption for Big Data Security.
R Thenmozhi, S Shridevi, Sachi Nandan Mohanty, Vicente García-Díaz, Deepak Gupta, Prayag Tiwari, Mohammad Shorfuzzaman
Pub Date: 2024-10-01, Epub Date: 2021-12-13, DOI: 10.1089/big.2021.0176, Pages: 343-356
Internet usage has increased drastically across the globe, thanks to mobile phone penetration, and this generates huge volumes of data, in other words, big data. Security and privacy are the main issues to be considered in big data management. Hence, in this article, Attribute-based Adaptive Homomorphic Encryption (AAHE) is developed to enhance the security of big data. In the proposed methodology, Oppositional Based Black Widow Optimization (OBWO) is introduced to select the optimal key parameters for the AAHE method; incorporating an oppositional function improves the convergence behavior of Black Widow Optimization (BWO). The methodology comprises three processes: setup, encryption, and decryption. It is evaluated with non-abelian rings and homomorphic operations in ciphertext form, and it is further used to improve one-way security related to the conjugacy search problem. The resulting homomorphic encryption is then applied to secure big data. Two big data sets, the Adult data set and the Anonymous Microsoft Web data set, are used to validate the proposed methodology. Using performance metrics such as encryption time, decryption time, key size, processing time, and downloading and uploading time, the proposed method is evaluated against conventional cryptography techniques such as Rivest-Shamir-Adleman (RSA) and Elliptic Curve Cryptography (ECC). The key generation process is also compared against conventional methods such as BWO, Particle Swarm Optimization (PSO), and the Firefly Algorithm (FA). The results establish that the proposed method outperforms the compared methods and can be applied in real time in the near future.
Hybrid Deep Learning Approach for Traffic Speed Prediction.
Fei Dai, Pengfei Cao, Penggui Huang, Qi Mo, Bi Huang
Pub Date: 2024-10-01, Epub Date: 2022-02-02, DOI: 10.1089/big.2021.0251, Pages: 377-389
Traffic speed prediction plays a fundamental role in traffic management and driving route planning. However, timely and accurate traffic speed prediction is challenging because it is affected by complex spatial and temporal correlations. Most existing works cannot simultaneously model the spatial and temporal correlations in traffic data, resulting in unsatisfactory prediction performance. In this article, we propose a novel hybrid deep learning approach, named HDL4TSP, to predict traffic speed in each region of a city; it consists of an input layer, a spatial layer, a temporal layer, a fusion layer, and an output layer. Specifically, the spatial layer employs graph convolutional networks to capture both near and distant spatial dependencies. The temporal layer employs convolutional long short-term memory (ConvLSTM) networks to model closeness, daily periodicity, and weekly periodicity in the temporal dimension. The fusion layer then merges the outputs of the ConvLSTM networks. Finally, we conduct extensive experiments, and the results show that HDL4TSP outperforms four baselines on two real-world data sets.
A Weighted GraphSAGE-Based Context-Aware Approach for Big Data Access Control.
Dibin Shan, Xuehui Du, Wenjuan Wang, Aodi Liu, Na Wang
Pub Date: 2024-10-01, Epub Date: 2023-08-01, DOI: 10.1089/big.2021.0473, Pages: 390-411
Context information is the key element in realizing dynamic access control for big data. However, existing context-aware access control (CAAC) methods do not support automatic context awareness and cannot automatically model and reason about context relationships. To solve these problems, this article proposes a weighted GraphSAGE-based context-aware approach for big data access control. First, graph modeling is performed on the access-record data set, transforming the access control context-awareness problem into a graph neural network (GNN) node learning problem. Then, a GNN model, WGraphSAGE, is proposed to achieve automatic context awareness and automatic generation of CAAC rules. Finally, weighted neighbor sampling and weighted aggregation algorithms are designed for the model, so that node relationships and relationship strengths are modeled and reasoned about simultaneously during graph node learning. The experimental results show that the proposed method has clear advantages in context awareness and context relationship reasoning compared with similar GNN models, and it obtains better results in dynamic access control decisions than existing CAAC models.
Special Issue: Big Scientific Data and Machine Learning in Science and Engineering.
Farhad Pourkamali-Anaraki
Pub Date: 2024-08-01, Epub Date: 2024-07-31, DOI: 10.1089/big.2024.59218.kpa, Pages: 269
A Unified Training Process for Fake News Detection Based on Finetuned Bidirectional Encoder Representation from Transformers Model.
Vijay Srinivas Tida, Sonya Hsu, Xiali Hei
Pub Date: 2024-08-01, Epub Date: 2023-03-22, DOI: 10.1089/big.2022.0050, Pages: 331-342
An efficient fake news detector becomes essential as the accessibility of social media platforms increases rapidly. Previous studies mainly focused on designing models for individual data sets and may suffer degraded performance elsewhere. Developing a robust model for a combined data set with diverse knowledge therefore becomes crucial. However, designing a model for a combined data set requires extensive training time and a sequential workload to reach optimal performance without prior knowledge of the model's parameters. The present study helps solve these issues by introducing a unified training strategy that derives a base classifier structure and all hyperparameters from the individual models, using a pretrained transformer model. The performance of the proposed model is assessed on three publicly available data sets, namely ISOT and two others from the Kaggle website. The results indicate that the proposed unified training strategy surpasses existing models such as random forests, convolutional neural networks, and long short-term memory networks, reaching 97% accuracy and an F1 score of 0.97. Furthermore, training time was reduced by almost 1.5-1.8x by removing words shorter than three letters from the input samples. We also performed an extensive performance analysis, varying the number of encoder blocks to build compact models trained on the combined data set, and found that reducing the number of encoder blocks lowers performance.
A New Filter Approach Based on Effective Ranges for Classification of Gene Expression Data.
Derya Turfan, Bulent Altunkaynak, Özgür Yeniay
Pub Date: 2024-08-01, Epub Date: 2023-09-04, DOI: 10.1089/big.2022.0086, Pages: 312-330
Over the years, many studies have been carried out to reduce and eliminate the effects of diseases on human health. Gene expression data sets play a critical role in diagnosing and treating diseases. These data sets consist of thousands of genes measured over a small number of samples, which creates the curse of dimensionality and makes such data sets problematic to analyze. One of the most effective strategies for this problem is feature selection, a preprocessing step that improves classification performance by selecting the most relevant and informative features. In this article, we propose a new statistically based filter method for feature selection, named the Effective Range-based Feature Selection Algorithm (FSAER). As an extension of the earlier Effective Range based Gene Selection (ERGS) and Improved Feature Selection based on Effective Range (IFSER) algorithms, our method combines the advantages of both while also taking the disjoint area into account. To illustrate the efficacy of the proposed algorithm, experiments have been conducted on six benchmark gene expression data sets. The classification accuracies of FSAER and other filter methods have been compared to demonstrate the effectiveness of the proposed method. For classification, support vector machines, the naive Bayes classifier, and k-nearest neighbor algorithms have been used.
Hybrid Generalized Regularized Extreme Learning Machine Through Gradient-Based Optimizer Model for Self-Cleansing Nondeposition with Clean Bed Mode of Sediment Transport.
Enes Gul, Mir Jafar Sadegh Safari
Pub Date: 2024-08-01, Epub Date: 2023-03-07, DOI: 10.1089/big.2022.0120, Pages: 282-298
Sediment transport modeling is an important problem for minimizing sedimentation in open channels, which could otherwise lead to unexpected operation expenses. From an engineering perspective, developing accurate models for flow velocity computation based on the effective variables involved could provide a reliable solution for channel design. Furthermore, the validity of sediment transport models is linked to the range of data used for model development, and existing design models were established on limited data ranges. Thus, the present study utilizes all experimental data available in the literature, including recently published data sets that cover an extensive range of hydraulic properties. The extreme learning machine (ELM) algorithm and the generalized regularized extreme learning machine (GRELM) were implemented for the modeling, and particle swarm optimization (PSO) and the gradient-based optimizer (GBO) were then utilized for the hybridization of ELM and GRELM. GRELM-PSO and GRELM-GBO findings were compared with standalone ELM, GRELM, and existing regression models to assess their accuracy. The analysis demonstrated the robustness of the models that incorporate the channel parameter; the poor results of some existing regression models appear to be linked to their disregard of this parameter. Statistical analysis of the model outcomes showed that GRELM-GBO outperforms the ELM, GRELM, GRELM-PSO, and regression models, although its margin over GRELM-PSO is slight. The mean accuracy of GRELM-GBO was found to be 18.5% better than that of the best regression model. These promising findings may not only encourage the use of the recommended algorithms for channel design in practice but also further the application of novel ELM-based methods to other environmental problems.
Vertical and Horizontal Water Penetration Velocity Modeling in Nonhomogenous Soil Using Fast Multi-Output Relevance Vector Regression.
Babak Vaheddoost, Shervin Rahimzadeh Arashloo, Mir Jafar Sadegh Safari
Pub Date: 2024-08-01, Epub Date: 2023-03-14, DOI: 10.1089/big.2022.0125, Pages: 299-311
This study addresses the joint determination of the horizontal and vertical movement of water through a porous medium using fast multi-output relevance vector regression (FMRVR). An experimental data set collected in a 300 × 300 × 150 mm Plexiglas sand box is used, with a random mixture of sand of 0.5-1 mm grain size simulating the porous medium. In the experiments, walls of 2, 3, 7, and 12 cm are used together with injection locations of 130.7, 91.3, and 51.8 mm, measured from the cutoff wall at the upstream end. The Cartesian coordinates of the tracer, the time interval, the wall length in each setup, and two dummy variables determining the initial point are taken as independent variables for joint estimation of the horizontal and vertical velocity of water movement in the porous medium. For comparison, multilinear regression, random forest, and support vector regression are used as benchmarks against the FMRVR results. It is concluded that FMRVR outperforms the other models, while the uncertainty in estimating horizontal penetration is larger than that for vertical penetration.