Ageing and its related health conditions pose challenges not only to individuals but also to society. Various MRI techniques have been defined for the early detection of age-related diseases, and researchers continue to pursue prediction with different strategies. Accordingly, this research proposes a new brain age prediction model built from four processing steps: preprocessing, feature extraction, feature selection, and prediction. The initial step is preprocessing, where improved median filtering is proposed to reduce the noise in the image. Feature extraction then takes place, where shape-based, statistical, and texture features are extracted; in particular, improved LGTrP features are extracted. However, the curse of dimensionality becomes a serious issue at this point and shrinks prediction efficiency: the number of samples required to estimate any function accurately grows exponentially with the number of input variables. Hence, an improved feature selection model, termed improved Chi-square, is introduced in this paper. Finally, for prediction, a hybrid classifier is introduced that combines Bi-GRU and DBN models. To enhance the effectiveness of the hybrid method, Upgraded Blue Monkey Optimization with Improvised Evaluation (UBMOIE) is introduced as the training scheme, tuning the optimal weights of both classifiers. The performance of the proposed UBMOIE-based brain age prediction method was assessed against other schemes on various metrics.
{"title":"Hybrid deep model for brain age prediction in MRI with improved chi-square based selected features","authors":"Vishnupriya G.S, S. Rajakumari","doi":"10.3233/web-230060","DOIUrl":"https://doi.org/10.3233/web-230060","url":null,"abstract":"Ageing and its related health conditions bring many challenges not only to individuals but also to society. Various MRI techniques are defined for the early detection of age-related diseases. Researchers continue the prediction with the involvement of different strategies. In that manner, this research intends to propose a new brain age prediction model under the processing of certain steps like preprocessing, feature extraction, feature selection, and prediction. The initial step is preprocessing, where improved median filtering is proposed to reduce the noise in the image. After this, feature extraction takes place, where shape-based features, statistical features, and texture features are extracted. Particularly, Improved LGTrP features are extracted. However, the curse of dimensionality becomes a serious issue in this aspect that shrinks the efficiency of the prediction level. According to the “curse of dimensionality,” the number of samples required to estimate any function accurately increases exponentially as the number of input variables increases. Hence, a feature selection model with improvement has been introduced in this paper termed an improved Chi-square. Finally, for prediction purposes, a Hybrid classifier is introduced by combining the models like Bi-GRU and DBN, respectively. In order to enhance the effectiveness of the hybrid method, Upgraded Blue Monkey Optimization with Improvised Evaluation (UBMOIE) is introduced as the training system by tuning the optimal weights in both classifiers. Finally, the performance of the suggested UBMIOE-based brain age prediction method was assessed over the other schemes to various metrics.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"70 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77393732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IoT technologies & UAVs are frequently utilized in ecological monitoring areas. Unmanned Aerial Vehicles (UAVs) & IoT in farming technology can evaluate crop disease & pest incidence from the ground’s micro & macro aspects, correspondingly. UAVs could capture images of farms using a spectral camera system, and these images are been used to examine the presence of agricultural pests and diseases. In this research work, a novel IoT- assisted UAV- based pest detection with Arithmetic Crossover based Black Widow Optimization-Convolutional Neural Network (ACBWO-CNN) model is developed in the field of agriculture. Cloud computing mechanism is used for monitoring and discovering the pest during crop production by using UAVs. The need for this method is to provide data centers, so there is a necessary amount of memory storage in addition to the processing of several images. Initially, the collected input image by the UAV is assumed on handling the via-IoT-cloud server, from which the pest identification takes place. The pest detection unit will be designed with three major phases: (a) background &foreground Segmentation, (b) Feature Extraction & (c) Classification. In the foreground and background Segmentation phase, the K-means clustering will be utilized for segmenting the pest images. From the segmented images, it extracts the features including Local Binary Pattern (LBP) &improved Local Vector Pattern (LVP) features. With these features, the optimized CNN classifier in the classification phase will be trained for the identification of pests in crops. Since the final detection outcome is from the Convolutional Neural Network (CNN); its weights are fine-tuned through the ACBWO approach. Thus, the output from optimized CNN will portray the type of pest identified in the field. This method’s performance is compared to other existing methods concerning a few measures.
{"title":"Internet of Things assisted Unmanned Aerial Vehicle for Pest Detection with Optimized Deep Learning Model","authors":"Vijayalakshmi G, Radhika Y","doi":"10.3233/web-230062","DOIUrl":"https://doi.org/10.3233/web-230062","url":null,"abstract":"IoT technologies & UAVs are frequently utilized in ecological monitoring areas. Unmanned Aerial Vehicles (UAVs) & IoT in farming technology can evaluate crop disease & pest incidence from the ground’s micro & macro aspects, correspondingly. UAVs could capture images of farms using a spectral camera system, and these images are been used to examine the presence of agricultural pests and diseases. In this research work, a novel IoT- assisted UAV- based pest detection with Arithmetic Crossover based Black Widow Optimization-Convolutional Neural Network (ACBWO-CNN) model is developed in the field of agriculture. Cloud computing mechanism is used for monitoring and discovering the pest during crop production by using UAVs. The need for this method is to provide data centers, so there is a necessary amount of memory storage in addition to the processing of several images. Initially, the collected input image by the UAV is assumed on handling the via-IoT-cloud server, from which the pest identification takes place. The pest detection unit will be designed with three major phases: (a) background &foreground Segmentation, (b) Feature Extraction & (c) Classification. In the foreground and background Segmentation phase, the K-means clustering will be utilized for segmenting the pest images. From the segmented images, it extracts the features including Local Binary Pattern (LBP) &improved Local Vector Pattern (LVP) features. With these features, the optimized CNN classifier in the classification phase will be trained for the identification of pests in crops. Since the final detection outcome is from the Convolutional Neural Network (CNN); its weights are fine-tuned through the ACBWO approach. Thus, the output from optimized CNN will portray the type of pest identified in the field. This method’s performance is compared to other existing methods concerning a few measures.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"23 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91130818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Handoff management is the method by which a mobile node keeps its connection active when it moves from one location to another. The enormous success of mobile devices and wireless communications is emphasizing the need for mobility-aware services. Moreover, device mobility requires services to adapt their behavior to abrupt context variations and to be aware of handoffs, which introduce intermittent discontinuities and unpredictable delays. The heterogeneity of wireless network devices further complicates the situation, since every solution requires distinct treatment of handoffs and context-awareness. Hence, this paper introduces the Deep Q network-based Firefly Aquila Optimizer (DQN-FAO) for handoff management. Network selection is central to establishing handoff management; here, the network is selected by the devised FAO algorithm, a consolidation of the Aquila Optimizer (AO) and the Firefly Algorithm (FA), which uses metrics such as jitter, handoff latency, and Received Signal Strength Indicator (RSSI) as the fitness function. The handover decision is taken by the DQN, whose hyper-parameters are tuned by the devised FAO algorithm. According to the handover decision, context-aware video streaming is performed by adjusting the bit rate of the videos to the network bandwidth. The devised scheme attained superior performance, with a call drop of 0.5122, energy consumption of 7.086 J, handover delay of 10.54 ms, throughput of 13.17 Mbps, handoff latency of 93.80 ms, and PSNR of 46.89 dB.
{"title":"Firefly-Aquila optimized Deep Q network for handoff management in context aware video streaming-based heterogeneous wireless networks","authors":"Uttam P. Waghmode, U. Kolekar","doi":"10.3233/web-220090","DOIUrl":"https://doi.org/10.3233/web-220090","url":null,"abstract":"Handoff management is the method in which the mobile node maintains its connection active when it shifts from location to other. The devastating success of mobile devices as well as wireless communications is emphasizing the requirement for the expansion of mobility-aware facilities. Moreover, the mobility of devices requires services adapting their behavior to abrupt context variations and being conscious of handoffs, which make an intermittent discontinuities and unpredictable delays. Thus, the heterogeneity of wireless network devices confuses the situation, since a dissimilar treatment of handoffs and context-awareness is essential for every solution. Hence, this paper introduced the Deep Q network-based Firefly Aquila Optimizer (DQN-FAO) for performing the handoff management. In order to establish the handoff management, the process of selecting network is very important. Here, the network is selected based on the devised FAO algorithm, which is the consolidation of Aquila Optimizer (AO) and Firefly algorithm (FA) that considers the metrics, such as Jitter, Handoff latency, and Received Signal Strength Indicator (RSSI) as fitness function. Moreover, the handover decision is taken by the DQN, where the hyper-parameters are tuned by the devised FAO algorithm. According to the hand over decision taken, the context aware video streaming is happened by adjusting the bit rate of the videos using network bandwidth. Besides, the devised scheme attained the superior performance based on the call drop, energy consumption, handover delay, throughput, handoff latency, and PSNR of 0.5122, 7.086 J, 10.54 ms, 13.17 Mbps, 93.80 ms and 46.89 dB.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"74 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83988042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an automatic facial micro-expression recognition (FMER) system for video sequences. Identification and classification are performed on basic expressions: happy, surprise, fear, disgust, sadness, anger, and neutral states. The system integrates three main steps. The first step consists of face detection and tracking over three consecutive frames. In the second step, facial contour extraction is performed on each frame to build Euclidean distance maps. The last step is classification, which is achieved with two methods: SVM and convolutional neural networks. Experimental evaluation of the proposed system for facial micro-expression identification is performed on the well-known Cohn-Kanade and CASME II databases, with six and seven facial expressions for each classification method.
{"title":"Identification of micro expressions in a video sequence by Euclidean distance of the facial contours","authors":"S. Kherchaoui, A. Houacine","doi":"10.3233/web-220010","DOIUrl":"https://doi.org/10.3233/web-220010","url":null,"abstract":"This paper presents an automatic facial micro-expression recognition system (FMER) from video sequence. Identification and classification are performed on basic expressions: happy, surprise, fear, disgust, sadness, anger, and neutral states. The system integrates three main steps. The first step consists in face detection and tracking over three consecutive frames. In the second step, the facial contour extraction is performed on each frame to build Euclidean distance maps. The last task corresponds to the classification which is achieved with two methods; the SVM and using convolutional neural networks. Experimental evaluation of the proposed system for facial micro-expression identification is performed on the well-known databases (Chon and Kanade and CASME II), with six and seven facial expressions for each classification method.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"39 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83538756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The major intention of this research is to propose a secure authentication protocol for healthcare services in IoT based on a developed Q-Net-based secret key. The model comprises nine phases. Four entities are involved in the key generation process: the sensor node, the IoT device center, the gateway node, and the medical professional. The designed model derives a mathematical model that utilizes a hashing function, XOR, Chebyshev polynomials, passwords, an encryption algorithm, secret keys, and other security operations to perform effective authentication. Here, the secret key is generated with the Deep Q-Net-based sub-key generation approach. For 64-bit key lengths, the proposed method achieved a minimum computation time of 169×10^9 ns, minimum memory usage of 71.38, and a maximum detection rate of 0.957. Secure authentication using the proposed method is accurate and improves the effectiveness of the system's security.
{"title":"A secure authentication protocol for healthcare service in IoT with Q-net based secret key generation","authors":"Rupali Mahajan, Smita Chavan, Deepika Amol Ajalkar, Balshetwar SV, Prajakta Ajay Khadkikar","doi":"10.3233/web-220104","DOIUrl":"https://doi.org/10.3233/web-220104","url":null,"abstract":"The major intention of this research is to propose a secure authentication protocol for healthcare services in IoT based on a developed Q-Net-based secret key. Nine phases are included in the model. The sensor node, IoT device center, gateway node, and medical professional are the four entities involved in the key generation process. The designed model derived a mathematical model, which utilized hashing function, XOR, Chebyshev polynomial, passwords, encryption algorithm, secret keys, and other security operations for performing effective authentication. Here, the secret key is generated with the Deep Q-Net-based sub-key generation approach. The proposed method achieved the minimum computation time of 169xe9 ns, minimum memory usage is 71.38, and the obtained maximum detection rate is 0.957 for 64 key lengths. The secure authentication using the proposed method is accurate and improves the effectiveness of the system’s security.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135832801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The convergence of systems neuroscience and open science arouses great interest in the current brain big data era, highlighting the thinking capability of intelligent agents in handling multi-source knowledge, information, and data across various levels of granularity. To realize such thinking-inspired brain computing during a brain investigation process, one major challenge is to find a holistic brain map that can synthetically model the multi-dimensional variables of brain investigations across brain functions, experimental tasks, brain data, and analytical methods. In this paper, we propose a context-enhanced graph learning method to fuse open knowledge from different sources, comprising contextual information enrichment, structural knowledge fusion, and holistic graph learning. Such a method can enhance contextual learning of abstract concepts and relational learning between two concepts that are separated by a large gap across dimensions. As a result, an extensible space, namely the Thinking Space, is generated to represent holistic variables and their relations in a map, which currently contributes to systematic brain computing in brain research. In the future, the Thinking Space, coupled with the rapid development and spread of artificial-intelligence-generated content, will be developed in more scenarios so as to promote global interactions of intelligence in the connected world.
{"title":"Thinking space generation using context-enhanced knowledge fusion for systematic brain computing","authors":"Hongzhi Kuai, Xiao‐Rong Tao, Ning Zhong","doi":"10.3233/web-220089","DOIUrl":"https://doi.org/10.3233/web-220089","url":null,"abstract":"The convergence of systems neuroscience and open science arouses great interest in the current brain big data era, highlighting the thinking capability of intelligent agents in handling multi-source knowledge, information and data across various levels of granularity. To realize such thinking-inspired brain computing during a brain investigation process, one of the major challenges is to find a holistic brain map that can model multi-dimensional variables of brain investigations across brain functions, experimental tasks, brain data and analytical methods synthetically. In this paper, we propose a context-enhanced graph learning method to fuse open knowledge from different sources, including: contextual information enrichment, structural knowledge fusion, and holistic graph learning. Such a method can enhance contextual learning of abstract concepts and relational learning between two concepts that have large gap from different dimensions. As a result, an extensible space, namely Thinking Space, is generated to represent holistic variables and their relations in a map, which currently contributes to the field of brain research for systematic brain computing. In the future, the Thinking Space coupled with the rapid development and spread of artificial intelligence generated content will be developed in more scenarios so as to promote global interactions of intelligence in the connected world.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"10 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81527466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technology-enabled learning has progressively grown as a research area, with wide application of information and communication technologies for numerous standards-compliant learning and Open Educational Resources. This provides formidable support to users in selecting courses when they want to develop a course from available learning materials. But selecting a course by searching learning objects is an inherently complex operation across various repositories. In an e-learning platform, many complexities arise from the various software tools and specification formats, hindering the success of a course. In this paper, many obstacles in the e-learning platform are eradicated by utilizing Fuzzy Local Information C-Means (FLICM) clustering with matrix factorization for course selection. The dataset utilized in this work is E-Khool review data, from which an agglomerative matrix is generated; the agglomerative matrix consists of the learner series matrix and the course series matrix along with their binary matrix. After this, course grouping is carried out by FLICM clustering with matrix factorization. Group-course bilevel matching, followed by relevant learner retrieval and group-user matching, is done with Minkowski and Chebyshev distances. From this, the learner's preferred course is retrieved, and a recommendation using matrix factorization is carried out. Finally, the course with the maximum rating is recommended to the user. The performance of the developed FLICM matrix factorization is measured by metrics such as precision, recall, and f-measure, with values of 0.915, 0.850, and 0.882, respectively.
{"title":"FLICM clustering with matrix factorization based course recommendation in an E-learning platform","authors":"A. Madhavi, A. Nagesh, A. Govardhan","doi":"10.3233/web-220121","DOIUrl":"https://doi.org/10.3233/web-220121","url":null,"abstract":"Technology-enabled learning has progressively grown for research areas with wide application of information and communication technologies for numerous standard-compliant Learning and Open Educational Resources. This provides formidable support to users for the selection of courses when they want to develop the course with available learning materials. But selecting a course via searching learning objects is an inherently complex operation having various repositories. In an E-learning Platform, many complexities arise due to various software tools and specification formats that hinder the success of the course. In this paper, many obstacles in the E-learning platform are eradicated by utilizing Fuzzy Local Information C-Means (FLICM) clustering with matrix factorization for the selection of courses. The dataset utilized in this work is E-Khool review data, from which an agglomerative matrix is generated. Here, the agglomerative matrix consists of the learner series matrix and course series matrix along with their binary matrix. After this process, course grouping is carried out by FLICM clustering with matrix factorization. Moreover, group course bilevel matching, followed by relevant learner retrieval and group user is done by Minkowski and Chebyshev distance. From this learner’s preferred course is retrieved and then a recommendation using matrix factorization is carried out. Finally, the course is recommended for the user based on maximum rating. Furthermore, the performance of developed FLICM_matrix factorization is achieved by performance metrics, like precision, recall, and f-measure with values 0.915, 0.850, and 0.882, accordingly.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"14 2 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72768457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large amounts of storage are required to hold the recent massive influx of fresh photographs uploaded to the internet. Over the preceding decades, many analysts created expert image compression techniques to increase compression rates and visual quality. In this research work, a unique image compression technique is established for Vector Quantization (VQ) with the K-means Linde–Buzo–Gray (KLBG) model. As a contribution, the codebooks are optimized with the aid of a hybrid optimization algorithm. The projected KLBG model includes three major phases: an encoder for image compression, a channel for transmission of the compressed image, and a decoder for image reconstruction. In the encoder section, image vector creation, optimal codebook generation, and indexing are carried out. The input image enters the encoder stage, where it is split into non-overlapping blocks. The proposed GMISM model hybridizes the concepts of the Genetic Algorithm (GA) and Slime Mould Optimization (SMO). Once the optimal codebook is generated, each vector is indexed with an index number from the index table, and these index numbers are sent through the channel to the receiver. The decoder portion includes the index table, the optimal codebook, and the reconstructed image. The received index table decodes the received index numbers. The optimally produced codebook at the receiver is identical to the codebook at the transmitter. The matching code words are allocated to the received index numbers, and the code words are arranged so that the reconstructed image is the same size as the input image. Eventually, a comparative assessment is performed to evaluate the proposed model; in particular, the computation time of the proposed model is 69.11%, 27.64%, 62.07%, 87.67%, 35.73%, 62.35%, and 14.11% better than the extant CSA, BFU-ROA, PSO, ROA, LA, SMO, and GA algorithms, respectively.
{"title":"Image compression based on vector quantization and optimized code-book design using Genetic Mating Influenced Slime Mould (GMISM) algorithm","authors":"Pratibha Chavan, B. Rani, M. Murugan, P. Chavan","doi":"10.3233/web-220050","DOIUrl":"https://doi.org/10.3233/web-220050","url":null,"abstract":"Large amounts of storage are required to store the recent massive influx of fresh photographs that are uploaded to the internet. Many analysts created expert image compression techniques during the preceding decades to increase compression rates and visual quality. In this research work, a unique image compression technique is established for Vector Quantization (VQ) with the K-means Linde–Buzo–Gary (KLBG) model. As a contribution, the codebooks are optimized with the aid of hybrid optimization algorithm. The projected KLBG model included three major phases: an encoder for image compression, a channel for transitions of the compressed image, and a decoder for image reconstruction. In the encoder section, the image vector creation, optimal codebook generation, and indexing mechanism are carried out. The input image enters the encoder stage, wherein it’s split into immediate and non-overlapping blocks. The proposed GMISM model hybridizes the concepts of the Genetic Algorithm (GA) and Slime Mould Optimization (SMO), respectively. Once, the optimal codebook is generated successfully, the indexing of the every vector with index number from index table takes place. These index numbers are sent through the channel to the receiver. The index table, optimal codebook and reconstructed picture are all included in the decoder portion. The received index table decodes the received indexed numbers. The optimally produced codebook at the receiver is identical to the codebook at the transmitter. The matching code words are allocated to the received index numbers, and the code words are organized so that the reconstructed picture is the same size as the input image. Eventually, a comparative assessment is performed to evaluate the proposed model. Especially, the computation time of the proposed model is 69.11%, 27.64%, 62.07%, 87.67%, 35.73%, 62.35%, and 14.11% better than the extant CSA, BFU-ROA, PSO, ROA, LA, SMO, and GA algorithms, respectively.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"5 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79825136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There is a growing need for recommender systems and other ML-based systems as an abundance of data is now available across all industries. Various industries currently use recommender systems in slightly different ways: these programs use algorithms to propose appropriate products to consumers based on their prior choices and interactions. Systems for recommending events suggest pertinent happenings that users might find interesting; as opposed to an object recommender that suggests books or movies, event-based recommender systems typically require distinct algorithms. A developed event recommendation method is introduced that includes two stages: feature extraction and recommendation. In Stage I, a set of features such as personal willingness, community willingness, informative content, edge weight, and node interest degree is extracted. Stage II of the event recommendation system performs a hybrid classification by combining LSTM and CNN. In the LSTM classifier, optimal tuning is done by the Improvised Cat and Mouse Optimization (ICMO) algorithm. At an 80% training percentage, the ICMO technique attains the maximum sensitivity value of 95.19%, whereas the existing approaches SSA, DINGO, BOA, and CMBO attain 93.89%, 93.35%, 92.36%, and 92.24%, respectively. Finally, the best result is determined by evaluating the whole performance.
{"title":"Optimal hybrid classification model for event recommendation system","authors":"Nithya Bn, D. Geetha, Manish Kumar","doi":"10.3233/web-220137","DOIUrl":"https://doi.org/10.3233/web-220137","url":null,"abstract":"There is a growing need for recommender systems and other ML-based systems as an abundance of data is now available across all industries. Various industries are currently using recommender systems in slightly different ways. These programs utilize algorithms to propose appropriate products to consumers based on their prior choices and interactions. Moreover, Systems for recommending events to users suggest pertinent happenings that they might find interesting. As opposed to an object recommender that suggests books or movies; event-based recommender systems typically require distinct algorithms. A developed event recommendation method is introduced which includes two stages: feature extraction and recommendation. In stage, I, a Set of features like personal willingness, community willingness, informative content, edge weight, and node interest degree are extracted. Stage II of the event recommendation system performs a hybrid classification by combining LSTM and CNN. In the LSTM classifier, optimal tuning is done by Improvised Cat and Mouse optimization (ICMO) algorithm. The results of the ICMO technique at an 80% training percentage have the maximum sensitivity value of 95.19%, whereas those of the existing approaches SSA, DINGO, BOA, and CMBO have values of 93.89%, 93.35%, 92.36%, and 92.24%. Finally, the best result is then determined by evaluating the whole performance.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"1 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82529035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alzheimer's disease (AD), a neurodegenerative disorder, is the most common cause of dementia and continuing cognitive deficits. With more cases each year, AD has grown into a serious social and public health issue. Early diagnosis of Alzheimer's and dementia, together with the right care, is crucial, and the importance of early AD diagnosis has recently received much attention. Present methods of diagnosing AD take so long and cost so much that patients cannot receive a timely diagnosis. We therefore created a new AD detection method with four steps of operation: pre-processing, feature extraction, feature selection, and AD detection. During the pre-processing stage, the input data is pre-processed using an improved data normalization method. The pre-processed data then goes through a feature extraction procedure where statistical, enhanced entropy-based, and mutual-information-based features are extracted. The appropriate features are chosen from these extracted characteristics using the enhanced Chi-square technique. Based on the selected features, a hybrid model is used to detect AD. This hybrid model combines Long Short Term Memory (LSTM) and Deep Maxout neural network classifiers, and the weight parameters of LSTM and Deep Maxout are optimized by the Self Updated Shuffled Shepherd Optimization Algorithm (SUSSOA). In statistical analysis, the proposed SUSSOA-based method's best values are 57%, 53%, 28%, 25%, and 21% higher than those of other models such as SSO, BMO, HGS, BRO, BES, and ISSO, respectively.
{"title":"Intelligence model for Alzheimer’s disease detection with optimal trained deep hybrid model","authors":"Rajasree Rs, Brintha Rajakumari S","doi":"10.3233/web-220129","DOIUrl":"https://doi.org/10.3233/web-220129","url":null,"abstract":"Alzheimer’s disease (AD), a neurodegenerative disorder, is the most common cause of dementia and continuing cognitive deficits. Since there are more cases each year, AD has grown to be a serious social and public health issue. Early detection of the diagnosis of Alzheimer’s and dementia disease is crucial, as is giving them the right care. The importance of early AD diagnosis has recently received a lot of attention. The patient cannot receive a timely diagnosis since the present methods of diagnosing AD take so long and are so expensive. That’s why we created a brand-new AD detection method that has four steps of operation: pre-processing, feature extraction, feature selection, and AD detection. During the pre-processing stage, the input data is pre-processed using an improved data normalization method. Following the pre-processing, these pre-processed data will go through a feature extraction procedure where features including statistical, enhanced entropy-based and mutual information-based features will be extracted. The appropriate features will be chosen from these extracted characteristics using the enhanced Chi-square technique. Based on the selected features, a hybrid model will be used in this study to detect AD. This hybrid model combines classifiers like Long Short Term Memory (LSTM) and Deep Maxout neural networks, and the weight parameters of LSTM and Deep Maxout will be optimized by the Self Updated Shuffled Shepherd Optimization Algorithm (SUSSOA). Our Proposed SUSSOA-based method’s statistical analysis of best values such as 57%, 53%, 28%, 25%, and 21% is higher than the other models like SSO, BMO, HGS, BRO, BES, and ISSO respectively.","PeriodicalId":42775,"journal":{"name":"Web Intelligence","volume":"98 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77612508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}