
Latest Publications in Web Intelligence

Design of compound data acquisition gateway based on 5G network
IF 0.3 Q3 Computer Science Pub Date: 2023-08-02 DOI: 10.3233/web-220071
Jufen Hu, G. Lorenzini
With the wide application of the industrial Internet of Things, the growing volume of data, and the increasing complexity of data types, higher requirements are placed on the performance of data acquisition gateways. To reduce the gateway's data acquisition time and improve its data retrieval coverage, a novel design method for a composite data acquisition gateway based on a 5G network is proposed. Building on an analysis of the related technologies, the functional requirements of the composite data acquisition gateway are summarized and the overall design of the gateway is completed. On this basis, the hardware environment is constructed by designing the main control module, the 5G module, and the FPGA program; the software is then built by designing the data acquisition driver, the 5G module driver, the embedded software, and the protocol conversion process. The experimental results show that the data retrieval coverage of the gateway designed by this method remains above 92%, which is 6% higher than that of method 1. This indicates that the method significantly improves data retrieval coverage, speeds up data collection, and improves the performance of the data acquisition gateway, demonstrating its effectiveness and feasibility and supporting the intelligent development of data acquisition gateway technology.
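The software stack described above ends with a protocol conversion step. As a rough illustration (not taken from the paper), the sketch below converts a binary field-bus frame of big-endian 16-bit registers into a JSON payload ready for upstream transmission over the 5G link; the frame layout and the sensor identifiers are hypothetical.

```python
import json
import struct

def convert_frame(raw: bytes, sensor_ids: list) -> str:
    """Unpack big-endian 16-bit register values from a field-bus frame
    and re-encode them as a JSON payload for upstream transmission."""
    count = len(raw) // 2
    values = struct.unpack(f">{count}H", raw[: count * 2])
    payload = {sid: val for sid, val in zip(sensor_ids, values)}
    return json.dumps(payload)
```

For example, a four-byte frame holding two registers maps to a two-field JSON object keyed by the supplied sensor ids.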
Citations: 0
Reliable routing in MANET with mobility prediction via long short-term memory
IF 0.3 Q3 Computer Science Pub Date: 2023-07-27 DOI: 10.3233/web-220110
Manjula A. Biradar, Sujata Mallapure
A MANET consists of a self-configured group of portable mobile nodes and lacks a central infrastructure to manage network traffic. To facilitate communication, govern route discovery, and manage resources, all moving nodes in these multi-hop wireless networks cooperate. Such networks struggle with dependability, energy consumption, and collision avoidance. The goal of this research is to establish a new, dependable MANET routing model in which the selection of predictor nodes comes first. The adaptive weighted clustering algorithm (AWCA) is used to select predictor nodes based on factors such as distance, security (risk), Receiver Signal Strength Indicator (RSSI), Packet Delivery Ratio (PDR), and energy. Using the Interfused Slime and Battle Royale Optimization with Arithmetic Crossover (IS&BRO–AC) model, the node with the lower weight is selected as the Cluster Head (CH). Additionally, mobility prediction is carried out: node mobility is forecast using an Improved Long Short-Term Memory (LSTM) network that takes distance and RSSI into account. Based on the forecast, trustworthy data transfer is implemented, ensuring more accurate and dependable MANET routing. The evaluation concludes with an examination of RSSI, PDR, and other metrics.
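To make the weighted selection concrete, here is a minimal sketch (not the paper's implementation) of cluster-head selection: each node's factors are assumed pre-normalized to [0, 1] with lower values preferable, combined into a single weight, and the node with the minimum weight becomes the CH, matching the abstract's rule. The factor weights are illustrative; the paper tunes the selection with IS&BRO–AC.

```python
# Illustrative factor weights, assumed to sum to 1.0.
FACTOR_WEIGHTS = {"distance": 0.3, "risk": 0.2, "rssi": 0.2,
                  "pdr": 0.2, "energy": 0.1}

def node_weight(metrics: dict) -> float:
    """Combine normalized factors (lower = better) into one weight."""
    return sum(FACTOR_WEIGHTS[k] * metrics[k] for k in FACTOR_WEIGHTS)

def select_cluster_head(nodes: dict) -> str:
    """Return the id of the node with the minimum combined weight."""
    return min(nodes, key=lambda nid: node_weight(nodes[nid]))
```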
Citations: 0
Multi-objective secure aware workflow scheduling algorithm in cloud computing based on hybrid optimization algorithm
IF 0.3 Q3 Computer Science Pub Date: 2023-07-27 DOI: 10.3233/web-220094
G. Narendrababu Reddy, S. Phani Kumar
Cloud computing provides on-demand services to users through distributed physical machines, where security has become a challenging factor in performing various tasks. Several methods have been developed for cloud workflow scheduling based on optimal resource allocation; still, security considerations and efficient allocation of the workflow remain challenging. Hence, this research introduces a hybrid optimization algorithm for multi-objective workflow scheduling in the cloud computing environment. The Regressive Whale Water Tasmanian Devil Optimization (RWWTDO) is proposed for optimal workflow scheduling based on a multi-objective fitness function with nine factors: predicted energy, quality of service (QoS), resource utilization, actual task running time, bandwidth utilization, memory capacity, makespan equivalent of the total cost, task priority, and trust. In addition, secure data transmission is implemented using the Triple Data Encryption Standard (3DES) to strengthen the security of workflow scheduling. The method's performance is evaluated in terms of resource utilization, predicted energy, task scheduling cost, and task scheduling time, achieving values of 1.00000, 0.16587, 0.00041, and 0.00314, respectively.
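As an illustration of how such a nine-factor objective can be scalarized (a generic weighted-sum sketch, not the RWWTDO fitness itself), benefit-like factors enter the score positively and cost-like factors negatively; all values and weights are assumed normalized, and the factor-name split below is my own reading of the abstract.

```python
# Assumed split: larger is better (maximize) vs. worse (minimize).
MAXIMIZE = {"qos", "resource_utilization", "bandwidth_utilization",
            "memory_capacity", "task_priority", "trust"}
MINIMIZE = {"predicted_energy", "running_time", "makespan_cost"}

def fitness(factors: dict, weights: dict) -> float:
    """Scalarize normalized factors into one score to maximize:
    benefit factors add to the score, cost factors subtract."""
    score = 0.0
    for name, value in factors.items():
        sign = 1.0 if name in MAXIMIZE else -1.0
        score += sign * weights.get(name, 1.0) * value
    return score
```

A candidate schedule with high QoS but high predicted energy is thus penalized relative to one that balances both.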
Citations: 0
Liver cancer classification via deep hybrid model from CT image with improved texture feature set and fuzzy clustering based segmentation
IF 0.3 Q3 Computer Science Pub Date: 2023-07-19 DOI: 10.3233/web-230042
Vinnakota Sai Durga Tejaswi, V. Rachapudi
Liver cancer is one of the leading causes of death worldwide, and manually identifying cancerous tissue is a challenging and time-consuming task. Segmentation of liver lesions in Computed Tomography (CT) scans can be used to assess tumor load, plan therapies, make predictions, and track the clinical response. This paper proposes a new technique for liver cancer classification from CT images. The method consists of four stages: pre-processing, segmentation, feature extraction, and classification. In the initial stage, the input image is pre-processed for quality enhancement. The pre-processed output is then subjected to the segmentation phase, where an improved deep fuzzy clustering technique is applied. Subsequently, the segmented image is the input of the feature extraction phase, where the extracted features are the Improved Gabor Transitional Pattern, Grey-Level Co-occurrence Matrix (GLCM) features, statistical features, and Convolutional Neural Network (CNN)-based features. Finally, the extracted features are passed to the classification stage, which uses two classifiers: Bi-GRU and Deep Maxout. In this phase, Crossover Mutated COOT optimization (CMCO) is applied to tune the weights, thereby improving quality. The proposed technique delivers the best disease-identification accuracy: CMCO attains 95.58%, which is preferable to AO (92.16%), COA (89.38%), TSA (88.05%), AOA (92.05%), and COOT (91.95%).
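Among the texture descriptors listed, the GLCM is straightforward to compute. A minimal plain-Python sketch (independent of the paper's improved feature set): count co-occurring grey-level pairs at a fixed pixel offset, normalize the counts, and derive a statistic such as contrast.

```python
def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for a 2D grid of integer grey
    levels: count pixel pairs at offset (dx, dy), then normalize."""
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[img[y][x]][img[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def contrast(p):
    """GLCM contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))
```

Libraries such as scikit-image provide the same computation (plus energy, homogeneity, etc.) for production use.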
Citations: 0
An extensive review on crop/weed classification models
IF 0.3 Q3 Computer Science Pub Date: 2023-07-18 DOI: 10.3233/web-220115
Bikramaditya Panda, M. Mishra, B. P. Mishra, A. K. Tiwari
Crop and weed identification remains a challenge for unmanned weed control. Because of the small clearance between the chopping tine and the crop, weed identification in annual crops must be extremely exact. This study comprises a literature evaluation covering the 50 most important research publications in IEEE, ScienceDirect, and Springer journals, all gathered from 2012 to 2022. The diagnosis pipeline typically includes pre-processing, feature extraction, and crop/weed classification. This review analyzes the 50 articles from several aspects, such as the datasets used for evaluation and the strategies used for pre-processing, feature extraction, and classification, to give a clear picture of them. Furthermore, each work's performance in accuracy, sensitivity, and precision is reported, and the present hurdles in crop and weed identification are described, serving as a benchmark for upcoming researchers.
Citations: 0
SMoGW-based deep CNN: Plant disease detection and classification using SMoGW-deep CNN classifier
IF 0.3 Q3 Computer Science Pub Date: 2023-06-28 DOI: 10.3233/web-230015
A. Pahurkar, Ravindra M. Deshmukh
Diagnosing plant disease plays a major role in reducing losses in yield production, which otherwise lead to economic losses. Various disease-control measures are applied without a proper diagnosis, which wastes expense and time. In prevalent methods, image-based diagnosis yields unsatisfactory results because of poor image clarity, caused mainly by the weak performance of existing pre-trained image classifiers. This issue is addressed by the SMoGW-deep convolutional neural network (deep CNN) classifier for accurate and precise classification of plant leaf disease. The developed method transforms poor-quality captured images into high quality through a preprocessing technique. A threshold value is computed from the pre-processed input image with the Otsu method, and the disease-affected region is extracted from the image pixels accordingly. The region of interest is then separated from the other parts of the input leaf image using K-means segmentation. The stored features in the feature vector are fed forward to the deep CNN classifier for training and are optimized by the SMoGW optimization approach. The experiments achieved an accuracy of 94.5%, sensitivity of 94.525%, specificity of 94.6%, and precision of 95% with 90% of the data used for training; under K-fold training, the SMoGW-optimized classifier achieves 95% accuracy, 95% sensitivity, 94.1% specificity, and 92.1% precision, which is superior to the prevalent techniques for disease classification and detection. The potential and capability of the proposed method for plant leaf disease classification and identification are demonstrated experimentally.
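The thresholding step named above is a standard technique. Here is a plain-Python sketch of Otsu's method (a generic illustration, not the paper's code): choose the grey level that maximizes the between-class variance of the intensity histogram, then treat pixels above it as the candidate disease region.

```python
def otsu_threshold(pixels):
    """Otsu's method on 8-bit grey values: return the threshold that
    maximizes the between-class variance of the histogram."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0  # background weight and intensity sum so far
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                    # background mean
        m_f = (total_sum - sum_b) / w_f      # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

OpenCV exposes the same computation via `cv2.threshold` with the `THRESH_OTSU` flag.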
Citations: 0
A bias study and an unbiased deep neural network for recommender systems
IF 0.3 Q3 Computer Science Pub Date: 2023-06-23 DOI: 10.3233/web-230036
Li He, Jiashu Zhao, Yulong Gu, Mitchell Elbaz, Zhuoye Ding
User feedback data (e.g., clicks, dwell time on the product detail page) have been incorporated into the training process of many ranking models for better performance. Such approaches are widely used in many ranking applications, including search and recommendation. Recently, the inherent biases in user feedback data have been studied, showing how users' behavior can be affected by factors other than relevance. By identifying and removing these biases, ranking models can be further improved. Researchers have developed a variety of debiasing methods for different bias factors, but most focus on a single type of bias and pay little attention to the different types from a unified perspective. In this paper, we conduct a comprehensive study of bias focused on ranking problems in recommender systems, which is highly important for web intelligence research. We then share our experience in designing and optimizing unbiased models to improve feed recommendation. To uncover the effects of biases and achieve better ranking performance, we propose several unbiased models and compare them with state-of-the-art models. We conduct extensive offline experiments on real datasets and validate the effectiveness of our method by performing online A/B testing in a real-world recommender system.
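One widely used debiasing technique in this space is inverse propensity weighting (IPW) for position bias; the sketch below is a generic illustration, not the paper's proposed model. Each logged click is up-weighted by the inverse of the examination propensity of the position at which the item was shown, so items buried low in the ranking are not unfairly penalized.

```python
from collections import defaultdict

def ipw_ctr(logs, propensity):
    """logs: iterable of (item, position, clicked) records.
    Returns an IPW-corrected click-through estimate per item: each
    click counts as 1/propensity[position], averaged over impressions."""
    weighted = defaultdict(float)
    shown = defaultdict(int)
    for item, pos, clicked in logs:
        shown[item] += 1
        if clicked:
            weighted[item] += 1.0 / propensity[pos]
    return {i: weighted[i] / shown[i] for i in shown}
```

With equal raw click rates, an item clicked at a rarely examined position receives a higher corrected estimate than one clicked at the top.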
Citations: 0
Over comparative study of text summarization techniques based on graph neural networks
IF 0.3 Q3 Computer Science Pub Date: 2023-06-22 DOI: 10.3233/web-230014
Samina Mulla, N. Shaikh
Owing to the enormous amount of text available online through emails, social media, and news articles, summarizing textual information from multiple documents has become complicated. Text summarization automatically creates a comprehensive description of a document that retains its informative content through keywords; Multi-Document Summarization (MDS) is a productive tool that creates a concise and informative summary from a set of documents. To extract the relevant information from documents, Graph Neural Networks (GNNs) are neural architectures that capture the interrelations of a graph by passing messages between its nodes. In recent years, advanced variants of GNNs, such as the graph attention network (GAT), the graph recurrent network, and the graph convolutional network (GCN), have delivered remarkable performance in text summarization thanks to deep learning techniques. Hence, this survey analyzes and discusses graph approaches to text summarization, highlighting recent models based on deep learning. The article also provides a taxonomy that abstracts the design patterns of the neural networks involved and conducts a comprehensive review of existing text summarization models. Finally, it outlines future research directions that should motivate enthusiastic and novel contributions to text summarization.
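As a baseline for the graph-based ranking idea this survey builds on, here is a TextRank-style sketch (a classic pre-GNN method, included only for orientation): PageRank is run over a sentence-similarity matrix, and the highest-scoring sentences form the extractive summary. The damping factor and iteration count are the usual defaults, not values from any surveyed paper.

```python
def rank_sentences(similarity, d=0.85, iters=50):
    """TextRank-style scoring: power-iterate PageRank over a sentence-
    similarity graph given as a square matrix (similarity[j][i] is the
    edge weight from sentence j to sentence i). Higher score = more
    central sentence."""
    n = len(similarity)
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if i == j or similarity[j][i] == 0:
                    continue
                out_sum = sum(similarity[j][k] for k in range(n) if k != j)
                if out_sum > 0:
                    rank += similarity[j][i] / out_sum * scores[j]
            new.append((1 - d) / n + d * rank)
        scores = new
    return scores
```

A sentence similar to many others accumulates score from all of them, so it is selected first for the summary.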
{"title":"Over comparative study of text summarization techniques based on graph neural networks","authors":"Samina Mulla, N. Shaikh","doi":"10.3233/web-230014","DOIUrl":"https://doi.org/10.3233/web-230014","url":null,"abstract":"Due to the enormous content of text available online through emails, social media, and news articles, it has become complicated to summarize the textual information from multiple documents. Text summarization automatically creates a comprehensive description of the document that retains its informative contents through the keywords, where Multi-Document Summarization (MDS) is a productive tool for data accumulation that creates a concise and informative summary from the documents. In order to extract the relevant information from the documents, Graph neural networks (GNNs) is the neural structure that detains the interrelation of the graph by progressing the messages between the graphical nodes. In the current years, the advanced version of GNNs, such as graph attention network (GAN), graph recurrent network, and graph convolutional network (GCN) provides a remarkable performance in text summarization with the advantage of deep learning techniques. Hence, in this survey, graph approaches for text summarization has been analyzed and discussed, where the recent text summarization model based on Deep learning techniques are highlighted. Further, the article provides the taxonomy to abstract the design pattern of Neural Networks and conducts a comprehensive of the existing text summarization model. 
Finally, the review article enlists the future direction of the researcher, which would motivate the enthusiastic and novel contributions in text summarizations.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":null,"pages":null},"PeriodicalIF":0.3,"publicationDate":"2023-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90897084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
Hybrid deep model for brain age prediction in MRI with improved chi-square based selected features
IF 0.3 Q3 Computer Science Pub Date : 2023-06-22 DOI: 10.3233/web-230060
Vishnupriya G.S, S. Rajakumari
Ageing and its related health conditions pose many challenges, not only to individuals but also to society. Various MRI techniques have been defined for the early detection of age-related diseases, and researchers continue to pursue prediction through different strategies. Accordingly, this research proposes a new brain age prediction model built from several steps: preprocessing, feature extraction, feature selection, and prediction. The initial step is preprocessing, where improved median filtering is proposed to reduce noise in the image. Feature extraction then takes place, where shape-based, statistical, and texture features are extracted; in particular, improved LGTrP features are extracted. However, the curse of dimensionality becomes a serious issue at this stage and shrinks prediction efficiency: the number of samples required to estimate any function accurately increases exponentially with the number of input variables. Hence, an improved feature selection model, termed the improved Chi-square, is introduced in this paper. Finally, for prediction, a hybrid classifier is introduced by combining Bi-GRU and DBN models. To enhance the effectiveness of the hybrid method, Upgraded Blue Monkey Optimization with Improvised Evaluation (UBMOIE) is introduced as the training scheme, tuning the optimal weights of both classifiers. Finally, the performance of the suggested UBMOIE-based brain age prediction method is assessed against the other schemes on various metrics.
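The abstract does not specify what the "improved" Chi-square consists of, but the baseline chi-square feature score it presumably builds on can be sketched for binarized features: each feature is scored by how far the observed feature/class contingency table deviates from independence. All names below are illustrative, not the authors' code.

```python
def chi_square_scores(X, y):
    """Chi-square statistic of each binary feature against the class label.

    X: list of samples, each a list of 0/1 feature values; y: class labels.
    Higher score = stronger observed dependence between feature and label.
    """
    n = len(y)
    classes = set(y)
    n_feats = len(X[0])
    scores = []
    for f in range(n_feats):
        stat = 0.0
        for c in classes:
            for v in (0, 1):
                # Observed cell count and its expected count under independence.
                observed = sum(1 for xi, yi in zip(X, y) if xi[f] == v and yi == c)
                expected = (sum(1 for xi in X if xi[f] == v)
                            * sum(1 for yi in y if yi == c)) / n
                if expected > 0:
                    stat += (observed - expected) ** 2 / expected
        scores.append(stat)
    return scores
```

Features are then ranked by score and the top-k kept, which is how a chi-square filter shrinks dimensionality before the classifier sees the data.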
Cited: 0
Internet of Things assisted Unmanned Aerial Vehicle for Pest Detection with Optimized Deep Learning Model
IF 0.3 Q3 Computer Science Pub Date : 2023-06-22 DOI: 10.3233/web-230062
Vijayalakshmi G, Radhika Y
IoT technologies and UAVs are frequently utilized in ecological monitoring. Unmanned Aerial Vehicles (UAVs) and IoT in farming technology can evaluate crop disease and pest incidence from the ground's micro and macro aspects, respectively. UAVs can capture images of farms using a spectral camera system, and these images are used to examine the presence of agricultural pests and diseases. In this research work, a novel IoT-assisted, UAV-based pest detection model with Arithmetic Crossover based Black Widow Optimization-Convolutional Neural Network (ACBWO-CNN) is developed for agriculture. A cloud computing mechanism is used for monitoring and discovering pests during crop production with UAVs; the data centers it provides supply the memory and processing capacity required for the many captured images. Initially, the input image collected by the UAV is handled via the IoT cloud server, from which pest identification takes place. The pest detection unit is designed with three major phases: (a) background and foreground segmentation, (b) feature extraction, and (c) classification. In the segmentation phase, K-means clustering is utilized to segment the pest images. From the segmented images, features including Local Binary Pattern (LBP) and improved Local Vector Pattern (LVP) features are extracted. With these features, the optimized CNN classifier in the classification phase is trained to identify pests in crops. Since the final detection outcome comes from the Convolutional Neural Network (CNN), its weights are fine-tuned through the ACBWO approach. Thus, the output of the optimized CNN portrays the type of pest identified in the field. The method's performance is compared to other existing methods on several measures.
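As a minimal sketch of the segmentation phase, K-means can separate foreground (pest) from background by clustering pixel intensities into two groups. This operates on a flat list of grayscale values for brevity, whereas the paper's pipeline works on full UAV images; the function and parameter names are illustrative assumptions.

```python
import random

def kmeans_segment(pixels, k=2, iters=20, seed=0):
    """Cluster grayscale pixel intensities with k-means.

    Returns (labels, centers): one cluster label per pixel and the final
    cluster centers. With k=2 this acts as foreground/background separation.
    """
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)   # initialize centers from the data
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assignment step: each pixel joins its nearest center by intensity.
        labels = [min(range(k), key=lambda c: abs(p - centers[c]))
                  for p in pixels]
        # Update step: move each center to the mean of its assigned pixels.
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers
```

On well-separated intensities the two clusters converge in a few iterations; the darker (or brighter) cluster is then taken as the foreground mask that feeds the LBP/LVP feature extraction stage.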
Cited: 0