Software Defined Networking (SDN) facilitates centralized control and management of network devices, which solves many issues found in traditional networks. However, as the modern era generates a vast amount of data, the controller in an SDN can become overloaded. Numerous investigators have proposed approaches to address this controller-overloading problem. Most traditional models consider only two or three parameters to evenly distribute the load in SDN, which is not sufficient for a precise load balancing strategy. Hence, an effective load balancing model that considers a wider set of parameters is needed. Considering this aspect, this paper introduces a new load balancing model for SDN that follows three major phases: (a) workload prediction, (b) optimal load balancing, and (c) switch migration. Initially, workload prediction is done via an improved Deep Maxout Network. Then, optimal load balancing is performed via a hybrid optimization algorithm named Coati Updated Black Widow (CUBW), which conceptually combines the Coati Optimization Algorithm (COA) and Black Widow Optimization (BWO). The optimal load balancing considers migration time, migration cost, and distance, together with load balancing parameters such as server load, response time, and turnaround time. Finally, switch migration is carried out under the constraints of migration time, migration cost, and distance. The migration time of the proposed method is 27.3%, 40.8%, 24.40%, 41.8%, 42.8%, 42.2%, 40.0%, and 41.6% lower than that of previous models such as BMO, BES, AOA, TDO, CSO, GLSOM, HDD-PLB, BWO, and COA, respectively. Finally, the performance of the proposed work is validated against conventional methods through different analyses.
{"title":"Combined optimization strategy: CUBW for load balancing in software defined network","authors":"Sonam Sharma, Dambarudhar Seth, Manoj Kapil","doi":"10.3233/web-230263","DOIUrl":"https://doi.org/10.3233/web-230263","url":null,"abstract":"Software Defined Network (SDN) facilitates a centralized control management of devices in network, which solves many issues in the old network. However, as the modern era generates a vast amount of data, the controller in an SDN could become overloaded. Numerous investigators have offered their opinions on how to address the issue of controller overloading in order to resolve it. Mostly the traditional models consider two or three parameters to evenly distribute the load in SDN, which is not sufficient for precise load balancing strategy. Hence, an effective load balancing model is in need that considers different parameters. Considering this aspect, this paper presents a new load balancing model in SDN is introduced by following three major phases: (a) work load prediction, (b) optimal load balancing, and (c) switch migration. Initially, work load prediction is done via improved Deep Maxout Network. COA and BWO are conceptually combined in the proposed hybrid optimization technique known as Coati Updated Black Widow (CUBW). Then, the optimal load balancing is done via hybrid optimization named Coati Updated Black Widow (CUBW) Optimization Algorithm. The optimal load balancing is done by considering migration time, migration cost, distance and load balancing parameters like server load, response time and turnaround time. Finally, switch migration is carried out by considering the constraints like migration time, migration cost, and distance. The migration time of the proposed method achieves lower value, which is 27.3%, 40.8%, 24.40%, 41.8%, 42.8%, 42.2%, 40.0%, and 41.6% higher than the previous models like BMO, BES, AOA, TDO, CSO, GLSOM, HDD-PLB, BWO and COA respectively. Finally, the performance of proposed work is validated over the conventional methods in terms of different analysis.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"14 4","pages":""},"PeriodicalIF":0.3,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139383291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this competitive world, companies must sustain good relationships with their consumers. A CRM (customer relationship management) program can improve a company's customer satisfaction; to satisfy customers, different processes and techniques are established to make CRM more effective. This research sets out to determine the relationship between customer loyalty and retention. It also examines the impact of Customer Relationship Management (CRM) on customer satisfaction. The target population of this study is customers of the tourism industry in India (n = 300). Regression analysis is then carried out to discover the links between the variables. The results show that service quality and employee behavior each have a significant positive relationship with customer needs and satisfaction. To keep customers satisfied and retain them, CRM should be strong and reliable with consumers. CRM plays a vital role in increasing market share, raising productivity, deepening customer knowledge, and improving customer satisfaction, thereby increasing consumer loyalty by giving the company a clear view of who its customers are, what they need, and how to satisfy those needs and wants.
{"title":"The Customer Loyalty vs. Customer Retention: The Impact of Customer Relationship Management on Customer Satisfaction","authors":"Ram Kumar Dwivedi, Shailee Lohmor Choudhary, R. Dixit, Zainab Sahiba, Satyaprakash Naik","doi":"10.3233/web-230098","DOIUrl":"https://doi.org/10.3233/web-230098","url":null,"abstract":"In this competitive world, companies should sustain good relationships with their consumers. CRM (customer relationship management) program can improve the company’s customer satisfaction; to satisfy customer need different processes and technique are established to make the CRM more effective. This research is proposed to determine the relationship between customer loyalty and retention. Also, this research examines the impact of Customer Relationship Management (CRM) on Customer Satisfaction. The target population of this study is customers of the tourism industry in India ( n = 300). Then, regression analysis is carried out in order to discover the link between the variables. This study result shows that service quality and employee behavior of customer need and satisfaction with the effect of different significant of positive relation of both the variables. To make the customer satisfied and to retain their company the CRM should be strong and reliable with the consumers. CRM plays a vital role in increasing market share, high productivity, improving in-depth customer knowledge, and customer satisfaction to increase consumer loyalty to the company to have a clear view of who is their customer, what are the need of their customer and how can satisfy their needs and wants their customers.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"56 16","pages":""},"PeriodicalIF":0.3,"publicationDate":"2024-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139384936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supply chain management (SCM) is a major focus in various corporate circumstances. SCM designs and monitors numerous tasks across phases such as allocation, creation, product sourcing, and warehousing. From this perspective, the privacy of data flowing among producers, suppliers, and customers is important for ensuring market accountability. This work aims to develop a novel Improved Digital Navigator Assessment (DNA)-based Self Improved Pelican Optimization Algorithm (IDNA-based SIPOA) model for secured data transmission in SCM via blockchain. An improved DNA cryptosystem is applied to preserve the data. The original message is encrypted by an Improved Advanced Encryption Standard (IAES). The optimal key generation is done by the proposed SIPOA algorithm. The efficiency of the adopted model is analyzed against conventional methods with regard to security for data exchange in SCM. For 40% ciphertext, the proposed IDNA-based SIPOA obtained the lowest value of 0.71, while BWO reached 0.79, DOA 0.77, TWOA 0.84, BOA 0.83, POA 0.86, SDSM 0.88, DNASF 0.82, and FSA-SLnO 0.78.
{"title":"Supply chain management with secured data transmission via improved DNA cryptosystem","authors":"P. Lahane, Shivaji R. Lahane","doi":"10.3233/web-230105","DOIUrl":"https://doi.org/10.3233/web-230105","url":null,"abstract":"Supply chain management (SCM) is most significant place of concentration in various corporate circumstances. SCM has both designed and monitored numerous tasks with the following phases such as allocation, creation, product sourcing, and warehousing. Based on this perspective, the privacy of data flow is more important among producers, suppliers, and customers to ensure the responsibility of the market. This work aims to develop a novel Improved Digital Navigator Assessment (DNA)-based Self Improved Pelican Optimization Algorithm (IDNA-based SIPOA model) for secured data transmission in SCM via blockchain. An improved DNA cryptosystem is done for the process of preservation for data. The original message is encrypted by Improved Advanced Encryption Standard (IAES). The optimal key generation is done by the proposed SIPOA algorithm. The efficiency of the adopted model has been analyzed with conventional methods with regard to security for secured data exchange in SCM. The proposed IDNA-based SIPOA obtained the lowest value for the 40% cypher text is 0.71, while the BWO is 0.79, DOA is 0.77, TWOA is 0.84, BOA is 0.83, POA is 0.86, SDSM is 0.88, DNASF is 0.82 and FSA-SLnO is 0.78, respectively.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"36 10","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139008870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cancers are genetically diversified, so anticancer treatments have different levels of efficacy across people due to genetic differences. The main objective of this work is to predict anticancer drug efficacy for colorectal cancer patients, in order to reduce mortality rates and support patients' immunity. This paper proposes a novel anti-cancer drug efficacy system for colorectal cancer patients. The input gene data is normalized with the Min-Max normalization technique, which rescales data from distinct scales. Subsequently, an improved entropy-based feature is proposed to evaluate the uncertainty distribution of the data; it introduces a weight to overcome the issue of computational complexity. Along with this feature, correlation-based and statistical features are also retrieved. Next, a Recursive Feature Elimination with Hybrid Machine Learning (RFEHML) mechanism is proposed for selecting the appropriate feature set, eliminating features recursively with the aid of hybrid machine learning strategies that combine a decision tree and logistic regression. Gini impurity is employed to rank the features, keeping those with the highest importance scores and eliminating those with the lowest. Further, a hybrid model is proposed for predicting drug efficacy from the selected feature set. The hybrid model comprises Long Short-Term Memory (LSTM) and an Updated Rectified Linear Unit-Deep Convolutional Neural Network (UReLU-DCNN), in which the DCNN is modified by updating the activation function at the fully connected layer. Consequently, the learned features predict the anti-cancer drug efficacy in colorectal cancer patients by determining whether a patient is a responder or non-responder to the drug. Finally, the performance of the proposed RFEHML model is compared with traditional approaches. The developed method shows higher accuracy at each learning percentage (LP), with values of 92.48% at 60LP, 94.28% at 70LP, 95.24% at 80LP, and 96.86% at 90LP.
{"title":"Hybrid deep model for predicting anti-cancer drug efficacy in colorectal cancer patients","authors":"A. Karthikeyan, S. Jothilakshmi, S. Suthir","doi":"10.3233/web-230260","DOIUrl":"https://doi.org/10.3233/web-230260","url":null,"abstract":"Cancers are genetically diversified, so anticancer treatments have different levels of efficacy on people due to genetic differences. The main objective of this work is to predict the anticancer drug efficiency for colorectal cancer patients to reduce the mortality rates and provides immune energy for the patients. This paper proposes a novel anti-cancer drug efficacy system in colorectal cancer patients. The input data gene is normalized with the Min–Max normalization technique that normalizes the data in distinct scales. Subsequently, proposes an improved entropy-based feature to evaluate the uncertainty distribution of data, in which it induces weight to overcome the issue of computational complexity. Along with this feature, a correlation-based feature and statistical features are also retrieved. Subsequently, proposes a Recursive Feature Elimination with Hybrid Machine Learning (RFEHML) mechanism for selecting the appropriate feature set by eliminating the recursive features with the aid of hybrid Machine Learning strategies that combine decision tree and logistic regression. Also, the Gini impurity is employed for ranking the feature and selecting the maximum importance score by eliminating the least acquired importance score. Further, proposes a hybrid model for predicting the drug efficiency with the trained feature set. The hybrid model comprises of Long Short-Term Memory (LSTM) and Updated Rectified Linear Unit-Deep Convolutional Neural Network (UReLU-DCNN) model, in which DCNN is modified by updating the activation function at the fully connected layer. Consequently, the learned feature predicts the drug efficacy of anti-cancer in colorectal cancer patients by determining whether the patient is a responder or non-responder of the drug. Finally, the performance of the proposed RFEHML model is compared with other traditional approaches. It is found that the developed method has higher accuracy for each learning percentage, with values of 60LP = 92.48%, 70LP = 94.28%, 80LP = 95.24%, and 90LP = 96.86%, respectively.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"2 3","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138589965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stock market forecasting remains a difficult problem in the economics industry due to its highly stochastic nature. An expert system of this kind aids investors in making investment decisions about a given company. Because of the complexity of the stock market, a single data source is insufficient to reflect all of the variables that influence stock fluctuations. Predicting stock market movement is therefore a challenging undertaking that requires extensive data analysis, particularly from a big data perspective. To address these problems and produce a feasible solution, appropriate statistical models and artificially intelligent algorithms are needed. This paper proposes a novel stock market prediction approach with four stages: preprocessing, feature extraction, improved feature-level fusion, and prediction. The input data, considered from a big data perspective, first passes through a preprocessing step in which stock, news, and Twitter data (related to the COVID-19 epidemic) are processed. The preprocessed data then undergo feature extraction: improved aspect-based lexicon generation, PMI, and n-gram-based features are derived from the news and Twitter data, while technical indicator-based features are derived from the stock data. The improved feature-level fusion phase is then applied to the extracted features. Ensemble classifiers, which include DBN, CNN, and DRN, are proposed for the prediction phase. Additionally, an SI-MRFO model is suggested to enhance the efficiency of the prediction model by adjusting the best classifier weights. Finally, SI-MRFO's effectiveness is compared with existing models with regard to MAE, MAPE, MSE, and MSLE. SI-MRFO achieves the minimal MAE for the 90th learning percentage, approximately 0.015, while other models record higher values.
{"title":"Stock market prediction-COVID-19 scenario with lexicon-based approach","authors":"Y. Ayyappa, A.P. Siva Kumar","doi":"10.3233/web-230092","DOIUrl":"https://doi.org/10.3233/web-230092","url":null,"abstract":"Stock market forecasting remains a difficult problem in the economics industry due to its incredible stochastic nature. The creation of such an expert system aids investors in making investment decisions about a certain company. Due to the complexity of the stock market, using a single data source is insufficient to accurately reflect all of the variables that influence stock fluctuations. However, predicting stock market movement is a challenging undertaking that requires extensive data analysis, particularly from a big data perspective. In order to address these problems and produce a feasible solution, appropriate statistical models and artificially intelligent algorithms are needed. This paper aims to propose a novel stock market prediction by the following four stages; they are, preprocessing, feature extraction, improved feature level fusion and prediction. The input data is first put through a preparation step in which stock, news, and Twitter data (related to the COVID-19 epidemic) are processed. Under the big data perspective, the input data is taken into account. These pre-processed data are then put through the feature extraction, The improved aspect-based lexicon generation, PMI, and n-gram-based features in this case are derived from the news and Twitter data, while technical indicator-based features are derived from the stock data. The improved feature-level fusion phase is then applied to the extracted features. The ensemble classifiers, which include DBN, CNN, and DRN, were proposed during the prediction phase. Additionally, a SI-MRFO model is suggested to enhance the efficiency of the prediction model by adjusting the best classifier weights. Finally, SI-MRFO model’s effectiveness compared to the existing models with regard to MAE, MAPE, MSE and MSLE. The SI-MRFO accomplished the minimal MAE rate for the 90th learning percentage is approximately 0.015 while other models acquire maximum ratings.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"114 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138622322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, the Internet of Things (IoT) plays a vital role in every industry, including agriculture, due to its widespread and easy integration. Agricultural methods are incorporated with IoT technologies for significant growth in agricultural fields. IoT is utilized to help farmers use their resources effectively and to support decision-making systems with better field monitoring techniques. The data collected from IoT-based agricultural systems is highly vulnerable to attack; to address this issue it is necessary to employ an authentication scheme. In this paper, Auth Key_Deep Convolutional Neural Network (Auth Key_DCNN) is designed to promote secure data sharing in IoT-enabled agriculture systems. The different entities, namely sensors, a Private Key Generator (PKG), a controller, and data users, are initially considered, and their parameters are randomly initialized. The entities are registered, and a secret key is generated in the PKG using the DCNN. The transmitted data is encrypted in the data protection phase, protecting the data exchanged between the controller and the user. Additionally, the performance of the designed model is estimated: experimental results reveal that the Auth Key_DCNN model records superior performance with a minimal computational cost of 142.56, memory usage of 49.5 MB, and a computational time of 1.34 sec.
{"title":"A novel authentication scheme for secure data sharing in IoT enabled agriculture","authors":"Arun A. Kumbi, M. Birje","doi":"10.3233/web-230244","DOIUrl":"https://doi.org/10.3233/web-230244","url":null,"abstract":"Now a days, the Internet of Things (IoT) plays a vital role in every industry including agriculture due to its widespread and easy integrations. The agricultural methods are incorporated with IoT technologies for significant growth in agricultural fields. IoT is utilized to support farmers in using their resources effectively and support decision-making systems with better field monitoring techniques. The data collected from IoT-based agricultural systems are highly vulnerable to attack, hence to address this issue it is necessary to employ an authentication scheme. In this paper, Auth Key_Deep Convolutional Neural Network (Auth Key_DCNN) is designed to promote secure data sharing in IoT-enabled agriculture systems. The different entities, namely sensors, Private Key Generator (PKG), controller, and data user are initially considered and the parameters are randomly initialized. The entities are registered and by using DCNN a secret key is generated in PKG. The encryption of transmitted data is performed in the data protection phase during the protection of data between the controller and the user. Additionally, the performance of the designed model is estimated, where the experimental results revealed that the Auth Key_DCNN model recorded superior performance with a minimal computational cost of 142.56, a memory usage of 49.5 MB, and a computational time of 1.34 sec.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"77 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139225993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IoT-Fog computing provides a wide range of services for end-based IoT systems. End IoT devices interface with cloud nodes and fog nodes to manage client tasks. Critical attacks like DDoS and other security risks are most likely to compromise IoT end devices while they are collecting data between the fog and cloud layers, so it is important to find these network vulnerabilities early. By extracting features and locating the threat in the network, deep learning (DL) is crucial for predicting end-user behavior. However, deep learning cannot be carried out on IoT devices themselves because of their constrained computation and storage capabilities. In this research, we propose a three-stage deep hybrid detection model for attack detection in the IoT-Fog architecture. In the first stage, improved Z-score normalization-based data preparation is carried out. In the second stage, features such as information gain (IG), raw data, entropy, and an enhanced mutual information (MI) measure are extracted from the preprocessed data. In the third stage, the extracted features are fed to hybrid classifiers, an optimized Deep Maxout and a Deep Belief Network (DBN), to classify the attacks in the input dataset. A hybrid optimization model called the BMUJFO (Blue Monkey Updated Jellyfish Optimization) technique is presented for optimal Deep Maxout training. The suggested model produced higher accuracy, precision, sensitivity, and specificity, with values of 95.26%, 94.84%, 96.28%, and 97.84%, respectively.
{"title":"Deep hybrid model for attack detection in IoT-fog architecture with improved feature set and optimal training","authors":"N. Pokale, Pooja Sharma, Deepak T. Mane","doi":"10.3233/web-230187","DOIUrl":"https://doi.org/10.3233/web-230187","url":null,"abstract":"IoT-Fog computing provides a wide range of services for end-based IoT systems. End IoT devices interface with cloud nodes and fog nodes to manage client tasks. Critical attacks like DDoS and other security risks are more likely to compromise IoT end devices while they are collecting data between the fog and the cloud layer. It’s important to find these network vulnerabilities early. By extracting features and placing the danger in the network, DL is crucial in predicting end-user behavior. However, deep learning cannot be carried out on Internet of Things devices because to their constrained calculation and storage capabilities. In this research, we suggest a three-stage Deep Hybrid Detection Model for Attack Detection in IoT-Fog Architecture. Improved Z-score normalization-based data preparation will be carried out in the initial step. On the basis of preprocessed data, features like IG, raw data, entropy, and enhanced MI are extracted in the second step. The collected characteristics are used as input to hybrid classifiers dubbed optimized Deep Maxout and Deep Belief Network (DBN) in the third step of the process to classify the assaults based on the input dataset. A hybrid optimization model called the BMUJFO (Blue Monkey Updated Jellyfish Optimization) technique is presented for the best Deep Maxout training. Additionally, the suggested model produced higher accuracy, precision, sensitivity, and specificity results, with values of 95.26 percent, 94.84%, 96.28%, and 97.84%, respectively.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"111 5","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139229127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The prevalence of violence against women and children is concerning, and the initial step is to raise awareness of this issue. Certain detection-based techniques are not always regarded as socially or culturally permissible. Designing and implementing effective approaches to secondary and supplementary prevention depends on characterization and assessment. Given the rising incidence of cases and resulting mortalities, developing an early detection system is essential; violence against women and children is a human health problem of pandemic proportions. The focus of this survey is therefore to analyze the existing methods used to identify violence in photos or films. Here, 50 research papers are reviewed, and their techniques, datasets, evaluation metrics, and publication years are analyzed. The study also identifies potential future research areas by examining the difficulties in identifying violence against women and children in the literature, which researchers must overcome to produce better results.
{"title":"An empirical study of various detection based techniques with divergent learning’s","authors":"Bhagyashree Pramod Bendale, Swati Swati Dattatraya Shirke","doi":"10.3233/web-230103","DOIUrl":"https://doi.org/10.3233/web-230103","url":null,"abstract":"The prevalence of violence against women and children is concerning, and the initial step is to raise awareness of this issue. Certain forms of detection based techniques are not frequently regarded both socially and culturally permissible. Designing and implementing effective approaches in secondary and supplementary avoidance simultaneously depends on the characterization and assessment. Given the greater incidence of instances and mortalities resulting developing an early detection system is essential. Consequently, violence against women and children is a problem of human health of pandemic proportions. As a result, the focus of this survey is to analyze the existing methods used to identify violence in photos or films. Here, 50 research papers are reviewed and their techniques employed, dataset, evaluation metrics, and publication year are analyzed. The study reviews the potential future research areas by examining the difficulties in identifying violence against women and children in literary works for researchers to overcome in order to produce better results.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"56 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136311650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing and debugging have been among the most significant steps of software development, since it is difficult for engineers to create error-free software. Software testing takes place after coding, with the goal of finding flaws; if errors are found, debugging is done to identify their source so they may be fixed. Detecting and locating defects are thus two essential stages in the creation of software. We have created a unique approach with two working phases to generate a minimized test suite capable of both detecting and localizing faults. In the initial test suite minimization phase, test cases are generated and minimized based on objectives such as D-score and coverage, using the proposed Blue Monkey Customized Black Widow (BMCBW) algorithm. After this test suite minimization, fault validation is done, which includes fault detection and localization; for this, we utilize an improved Long Short-Term Memory (LSTM). At a 90% learning percentage, the accuracy of the presented work is 0.97%, 2.20%, 2.52%, 0.97%, and 2.81% better than that of extant models such as AOA, COOT, BES, BMO, and BWO, respectively. The results obtained prove that our Blue Monkey Customized Black Widow Optimization-based fault detection and localization approach can provide superior outcomes.
{"title":"Test suite optimization under multi-objective constraints for software fault detection and localization: Hybrid optimization based model","authors":"Adline Freeda R, Selvi Rajendran P","doi":"10.3233/web-220131","DOIUrl":"https://doi.org/10.3233/web-220131","url":null,"abstract":"Testing and debugging have been the most significant steps of software development since it is tricky for engineers to create error-free software. Software testing takes place after coding with the goal of finding flaws. If errors are found, debugging would be done to identify the source of the errors so that they may be fixed. Detecting as well as locating defects are thus two essential stages in the creation of software. We have created a unique approach with the following two working phases to generate a minimized test suite that is capable of both detecting and localizing faults. In the initial test suite minimization process, the cases were generated and minimized based on the objectives such as D-score and coverage by the utilization of the proposed Blue Monkey Customized Black Widow (BMCBW) algorithm. After this test suite minimization, the fault validation is done which includes the process of fault detection and localization. For this fault validation, we have utilized an improved Long Short-Term Memory (LSTM). At 90% of the learning rate the accuracy of the presented work is 0.97%, 2.20%, 2.52%, 0.97% and 2.81% is better than the other extant models like AOA, COOT, BES, BMO and BWO methods. The results obtained proved that our Blue Monkey Customized Black Widow Optimization-based fault detection and localization approach can provide superior outcomes.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"6 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73334772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blockchain technology is commonly used as a replicated, distributed database in different areas. In this paper, a smart home blockchain network connects smart homes through smart devices to reduce their carbon footprint and thereby earn bitcoin value in the network. The network is composed of different smart homes interconnected with smart devices. A user makes a transaction request through the network layer, and the user's activity is matched against the reward table located at the incentive layer to estimate the bitcoin value. Furthermore, the miner verifies the transaction, sends the bitcoin value to the user, and adds the respective block to the network structure. The optimal parameter used to estimate the bitcoin value is computed using the proposed Improved Invasive Weed Mayfly Optimization (IIWMO) algorithm. The developed method attained higher performance on metrics such as coins earned, Annual Carbon Reduction (ACR), and fitness, reaching 0.00357 BTC, 23.891, and 0.6618 for 200 users. For 200 users, the fitness obtained by the proposed method is 14.41%, 16.68%, and 11.68% higher than that of existing approaches, namely without optimization, IIWO, and MA, respectively.
{"title":"A Unique Approach for Performance Analysis of a Blockchain and Cryptocurrency based Carbon Footprint Reduction System","authors":"Ankit Panch, Dr. Om Prakash Sharma","doi":"10.3233/web-220049","DOIUrl":"https://doi.org/10.3233/web-220049","url":null,"abstract":"Blockchain technology is commonly used as a replicated and distributed database in different areas. In this paper, a smart home blockchain network connects smart homes through smart devices for reducing carbon footprint and thereby earning bitcoin value in the network. The network is composed of different smart homes interconnected with smart devices. The user makes a transaction request through the network layer and matches the user’s activity with the reward table located at the incentive layer to estimate the bitcoin value. Furthermore, the miner verifies the transaction and sends the bitcoin value to the user, and adds the respective block to the network structure. The optimal parameter used to estimate the bitcoin value is computed using the proposed Improved Invasive Weed Mayfly Optimization (IIWMO) algorithm. The developed method attained higher performance with the metrics, like coins earned, Annual Carbon Reduction (ACR), and fitness as 0.00357BTC, 23.891, and 0.6618 for 200 users. For 200 users the fitness obtained by the proposed method is 14.41%, 16.68%, and 11.68% higher when compared to existing approaches namely, Without optimization, IIWO, and MA, respectively.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"63 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89899668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}