Cloud computing services are delivered by a distributed system with a shared resource pool. Under the provider's policy, customers enjoy continuous access to these resources. Every time a job is submitted to the cloud for execution, the environment must be provisioned appropriately: a sufficient number of virtual machines (VMs) must be available on the backend. The scheduling method therefore determines how well the system performs. An intelligent scheduling algorithm distributes jobs across all VMs to balance the overall workload. This is a load-balancing problem and falls into the class of NP-hard problems. Using spider monkey optimization, we implement a new strategy for more dependable and efficient load balancing in cloud environments. The proposed strategy aims to boost performance by choosing the least-loaded VM when distributing workloads. Simulation results show that the proposed algorithm outperforms existing approaches in load balancing, response time, makespan, and resource utilization.
G. Verma and Soumen Kanrar, "Load balancing model for cloud environment using swarm intelligence technique," Multiagent and Grid Systems, 15 December 2023. doi:10.3233/mgs-230021
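The core scheduling idea the abstract describes, assigning each job to the currently least-loaded VM, can be sketched as a simple greedy loop. This is an illustrative sketch, not the paper's spider-monkey-optimized algorithm; the VM count and job lengths below are made up for demonstration.

```python
# Hypothetical sketch of least-loaded-VM job placement.
# Job lengths and VM count are illustrative, not from the paper.

def assign_jobs(vm_count, job_lengths):
    """Greedily place each job on the VM with the smallest current load."""
    loads = [0] * vm_count          # accumulated load per VM
    placement = []                  # index of the VM chosen for each job
    for length in job_lengths:
        vm = min(range(vm_count), key=lambda i: loads[i])
        loads[vm] += length
        placement.append(vm)
    return loads, placement

loads, placement = assign_jobs(3, [5, 3, 8, 2, 4])
# loads -> [9, 5, 8]; the makespan is max(loads) = 9
```

A metaheuristic such as spider monkey optimization would search over placements like these to minimize the makespan globally rather than greedily.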
Cloud computing is an emerging technology that has garnered interest from both academic and commercial domains. The cloud offers huge computing capability and resources positioned at multiple locations, irrespective of the user's time or location. It uses virtualization to dispatch the many tasks that arrive simultaneously to servers. Allocating tasks to heterogeneous servers, however, requires that the load be balanced among them. To address this issue, a trust-based dynamic load-balancing algorithm for distributed file systems is proposed. The load on each physical machine is predicted with a Rider optimization algorithm-based Neural Network (RideNN). Load balancing is then carried out with the proposed Fractional Social Deer Optimization (FSDO) algorithm, which migrates virtual machines according to the load condition of the physical machine. The same FSDO algorithm also manages replicas in the distributed file system. The proposed FSDO-based dynamic load-balancing algorithm is evaluated on predicted load, prediction error, trust, cost, and energy consumption, with values of 0.051, 0.723, 0.390, and 0.431 J, respectively.
M. H. Nebagiri and Latha Pillappa Hanumanthappa, "Hybrid trust-based optimized virtual machine migration for dynamic load balancing and replica management in heterogeneous cloud," Multiagent and Grid Systems, 15 December 2023. doi:10.3233/mgs-230025
Social platforms disseminate news rapidly and have become an important news source for many people worldwide because of their easy access and low cost compared with traditional news organizations. Fake news is news deliberately written to manipulate original content, and its rapid dissemination can mislead society. It is therefore critical to investigate the veracity of information spread via social media platforms; even so, the reliability of information reported on these platforms remains doubtful and is a significant obstacle. This study proposes a promising technique for identifying fake information on social media: Adam Adadelta Optimization-based Deep Long Short-Term Memory (Deep LSTM). Tokenization is carried out with the Bidirectional Encoder Representations from Transformers (BERT) approach. Feature dimensionality is reduced with Kernel Linear Discriminant Analysis (LDA) and Singular Value Decomposition (SVD), and the top-N attributes are chosen using Renyi joint entropy. The Deep LSTM, trained with Adam Adadelta Optimization (a combination of Adam and Adadelta optimization), then identifies false information, achieving maximum accuracy, sensitivity, and specificity of 0.936, 0.942, and 0.925.
S. T. S. and P.S. Sreeja, "Adam Adadelta Optimization based bidirectional encoder representations from transformers model for fake news detection on social media," Multiagent and Grid Systems, 15 December 2023. doi:10.3233/mgs-230033
Jianfeng Meng, Gongpeng Zhang, Zihan Li, Hongji Yang
Crowdsourcing communities, as an important way for enterprises to obtain innovative knowledge from the public in the Internet era, have broad application prospects and research value. However, the influence of social preference is seldom considered when promoting knowledge sharing in crowdsourcing communities. Therefore, on the basis of complex-network evolutionary game theory and social preference theory, an evolutionary game model of knowledge sharing among crowdsourcing community users is constructed on a small-world network structure. The evolution and dynamic equilibrium of knowledge sharing on this network are simulated in Matlab, and results with and without social preference are compared and analysed; the comparison shows that social preference can significantly promote the evolution of knowledge sharing in crowdsourcing communities. This research expands the application of complex-network games to other disciplines, enriches the theoretical perspective of knowledge-sharing research in crowdsourcing communities, and offers strong guidance for promoting knowledge sharing in such communities.
Jianfeng Meng, Gongpeng Zhang, Zihan Li and Hongji Yang, "An evolutionary mechanism of social preference for knowledge sharing in crowdsourcing communities," Multiagent and Grid Systems, 15 December 2023. doi:10.3233/mgs-221532
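The simulation setup the abstract outlines, an evolutionary game played on a small-world network, can be sketched in a few dozen lines. This is a generic Watts-Strogatz ring with a Fermi-rule imitation update, not the paper's Matlab model; the payoff values, rewiring probability, and selection intensity are illustrative assumptions.

```python
import math
import random

# Minimal sketch (not the paper's model): agents on a Watts-Strogatz
# small-world network choose to share (1) or withhold (0) knowledge,
# then imitate a random neighbour with a Fermi probability.

def small_world(n, k, p, rng):
    """Ring lattice with k neighbours per side, each edge rewired with prob p."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:            # rewire endpoint to a random node
                b = rng.randrange(n)
            if a != b:
                edges.add((min(a, b), max(a, b)))
    nbrs = {i: set() for i in range(n)}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    return nbrs

def payoff(strategies, nbrs, i, benefit=3.0, cost=1.0):
    """Sharing pays a per-neighbour cost; everyone gains from sharing neighbours."""
    gain = sum(benefit for j in nbrs[i] if strategies[j] == 1)
    return gain - (cost * len(nbrs[i]) if strategies[i] == 1 else 0.0)

def step(strategies, nbrs, rng, kappa=0.5):
    """Asynchronous update: each agent may copy a random neighbour's strategy."""
    for i in range(len(strategies)):
        if not nbrs[i]:
            continue
        j = rng.choice(sorted(nbrs[i]))
        diff = payoff(strategies, nbrs, j) - payoff(strategies, nbrs, i)
        if rng.random() < 1.0 / (1.0 + math.exp(-diff / kappa)):
            strategies[i] = strategies[j]

rng = random.Random(0)
net = small_world(50, 2, 0.1, rng)
strats = [rng.randrange(2) for _ in range(50)]
for _ in range(20):
    step(strats, net, rng)
share_rate = sum(strats) / len(strats)
```

Social preference terms (e.g. fairness or reciprocity bonuses) would enter the `payoff` function; comparing `share_rate` with and without them mirrors the comparison the paper performs.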
Land cover classification from satellite images has been a major research area in recent years. The rise in the quantity of data produced by satellite imaging systems calls for an automated classification tool. Satellite images exhibit temporal and/or spatial dependencies on which traditional artificial intelligence approaches do not perform well. Hence, the suggested approach uses a new framework for classifying land cover. Histogram linearisation is first carried out during pre-processing. Spectral and spatial features are then extracted and merged in a feature fusion step. Finally, in the classification phase, an optimized Long Short-Term Memory (LSTM) network and Deep Belief Network (DBN) produce precise classified results, with the Opposition Behavior Learning based Water Wave Optimization (OBL-WWO) model used to tune the weights of the LSTM and DBN. Several metrics illustrate the new approach's effectiveness.
Malige Gangappa, "Feature level fusion for land cover classification with landsat images: A hybrid classification model," Multiagent and Grid Systems, 6 October 2023. doi:10.3233/mgs-230034
In data mining, deep learning and machine learning models face class imbalance problems, which lower the detection rate for minority-class samples. An improved Synthetic Minority Over-sampling Technique (SMOTE) is introduced for effective imbalanced-data classification. After the raw data are collected from the PIMA, Yeast, E. coli, and Breast Cancer Wisconsin databases, pre-processing is performed using min-max normalization, cleaning, integration, and data transformation to achieve data with better uniqueness, consistency, completeness, and validity. The improved SMOTE algorithm is applied to the pre-processed data for proper data distribution, and the properly distributed data are fed to machine learning classifiers: Support Vector Machine (SVM), Random Forest, and Decision Tree. Experiments confirmed that the improved SMOTE algorithm with Random Forest attained significant classification results, with Area Under the Curve (AUC) of 94.30%, 91%, 96.40%, and 99.40% on the PIMA, Yeast, E. coli, and Breast Cancer Wisconsin databases.
Yamijala Anusha, R. Visalakshi and Konda Srinivas, "Imbalanced data classification using improved synthetic minority over-sampling technique," Multiagent and Grid Systems, 6 October 2023. doi:10.3233/mgs-230007
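The core of classic SMOTE, on which the paper's improved variant builds, is interpolation between a minority sample and one of its k nearest minority neighbours. The sketch below shows that base mechanism only; the toy 2-D points and the choice k = 2 are illustrative assumptions, not the paper's configuration.

```python
import random

# Hedged sketch of the classic SMOTE interpolation step (not the
# paper's improved variant). Minority points below are made up.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def smote(minority, n_new, k=2, rng=None):
    """Generate n_new synthetic minority samples by neighbour interpolation."""
    rng = rng or random.Random(0)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest minority neighbours of the chosen base point
        nbrs = sorted((p for p in minority if p is not base),
                      key=lambda p: euclidean(base, p))[:k]
        nbr = rng.choice(nbrs)
        t = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append(tuple(b + t * (n - b) for b, n in zip(base, nbr)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (5.0, 5.0)]
new_points = smote(minority, n_new=3)
```

Every synthetic point lies on a segment between two existing minority points, which is why SMOTE densifies the minority region instead of merely duplicating samples.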
Recommendation systems suggest songs, products, movies, books, etc. to users based on a database. A movie recommendation system predicts the movies a user will like from attributes in the database, and is one of the most widespread, useful, and efficient applications for choosing movies with minimal decision time. Researchers have made several attempts to solve such problems (purchasing books, watching movies, etc.) by developing recommendation systems, but the majority fail to address data sparsity, cold-start issues, and malicious attacks. To overcome these problems, a new movie recommendation system is developed in this manuscript. Input data are acquired from the Movielens 1M, Movielens 100K, Yahoo Y-10-10, and Yahoo Y-20-20 databases and rescaled using min-max normalization, which helps handle outliers efficiently. The pre-processed data are then fed to an improved DenseNet model for relevant movie recommendation; the model includes a weighting factor and a class-balanced loss function to better control the risk of overfitting. Experimental results indicate that the improved DenseNet model reduced error values by roughly 5 to 10% and improved f-measure, precision, and recall by around 2% relative to conventional models on the four databases.
V. Lakshmi Chetana, Raj Kumar Batchu, Prasad Devarasetty, Srilakshmi Voddelli and Varun Prasad Dalli, "Effective movie recommendation based on improved densenet model," Multiagent and Grid Systems, 6 October 2023. doi:10.3233/mgs-230012
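The min-max rescaling step that several of these abstracts rely on maps every feature into a fixed range, which bounds the influence of outliers. A minimal sketch, with an illustrative ratings column:

```python
# Minimal sketch of min-max normalization; the input column is made up.

def min_max_scale(values, new_min=0.0, new_max=1.0):
    """Rescale values linearly into [new_min, new_max]."""
    lo, hi = min(values), max(values)
    if hi == lo:                         # constant column: map everything to new_min
        return [new_min] * len(values)
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo)
            for v in values]

ratings = [1, 3, 5]
scaled = min_max_scale(ratings)          # -> [0.0, 0.5, 1.0]
```

The scaling parameters (`lo`, `hi`) should be computed on training data only and reused on test data, so that the test set does not leak into pre-processing.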
Jinu P. Sainudeen, Ceronmani Sharmila V, Parvathi R
Melanoma has grown increasingly prevalent during the past few decades, and timely identification is crucial for lowering the mortality linked to this kind of skin cancer. An automated, trustworthy system that can detect melanoma can therefore be very helpful in medical diagnostics, so we introduce a five-stage method for detecting skin cancer. Input images are processed with histogram equalization and Gaussian filtering in the initial pre-processing stage. An Improved Balanced Iterative Reducing and Clustering using Hierarchies (I-BIRCH) method is proposed to improve image segmentation by efficiently assigning labels to pixels. In the third stage, features such as the Improved Local Vector Pattern, local ternary pattern, grey-level co-occurrence matrix, and local gradient patterns are retrieved from the segmented images. We propose an Arithmetic Operated Honey Badger Algorithm (AOHBA) to choose the best of the retrieved features, which lowers computational expense and training time. Finally, to demonstrate the effectiveness of the proposed skin cancer detection strategy, classification is performed with an improved Deep Belief Network (DBN) on the chosen features, and the performance assessment is matched against existing methodologies.
Jinu P. Sainudeen, Ceronmani Sharmila V and Parvathi R, "Skin cancer detection: Improved deep belief network with optimal feature selection," Multiagent and Grid Systems, 6 October 2023. doi:10.3233/mgs-230040
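One of the texture features this abstract lists, the grey-level co-occurrence matrix (GLCM), simply counts how often pairs of grey levels occur at a fixed pixel offset. The sketch below uses the horizontal offset (0, 1) and a tiny made-up 3-level image for demonstration.

```python
# Illustrative GLCM for the horizontal offset (0, 1); the image is made up.

def glcm(image, levels):
    """Count co-occurrences of grey levels for pixel pairs one step to the right."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):   # each horizontally adjacent pair
            m[a][b] += 1
    return m

image = [
    [0, 0, 1],
    [1, 2, 2],
    [2, 2, 0],
]
matrix = glcm(image, levels=3)
# matrix -> [[1, 1, 0],
#            [0, 0, 1],
#            [1, 0, 2]]
```

Texture statistics such as contrast, energy, and homogeneity are then computed from the normalized entries of this matrix, typically over several offsets and angles.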
Jagannath E. Nalavade, Chandra Sekhar Kolli, Sanjay Nakharu Prasad Kumar
Conventional recommendation techniques compute the similarity among products and customers to identify customer preferences. However, such similarity computations may produce incomplete information, which leads to poor recommendation accuracy. Hence, this paper introduces a novel and effective recommendation technique, Deep Embedded Clustering with matrix factorization (DEC with matrix factorization), for collaborative recommendation. The approach creates an agglomerative matrix for recommendation from the review data; the customer series matrix, customer series binary matrix, product series matrix, and product series binary matrix make up this agglomerative matrix. Similar products are grouped using DEC to retrieve the optimal product. Bi-level matching then generates the best group-customer sequence, in which relevant customers are retrieved using the Tversky index and angular distance. The final product suggestion is made using matrix factorization, with the goal of recommending the product with the highest rating to each client. According to the experimental results, the developed DEC with matrix factorization approach produced better results, with an f-measure of 0.902, precision of 0.896, and recall of 0.908.
{"title":"Deep embedded clustering with matrix factorization based user rating prediction for collaborative recommendation","authors":"Jagannath E. Nalavade, Chandra Sekhar Kolli, Sanjay Nakharu Prasad Kumar","doi":"10.3233/mgs-230039","DOIUrl":"https://doi.org/10.3233/mgs-230039","url":null,"abstract":"Conventional recommendation techniques utilize various methods to compute the similarity among products and customers in order to identify the customer preferences. However, such conventional similarity computation techniques may produce incomplete information influenced by similarity measures in customers’ preferences, which leads to poor accuracy on recommendation. Hence, this paper introduced the novel and effective recommendation technique, namely Deep Embedded Clustering with matrix factorization (DEC with matrix factorization) for the collaborative recommendation. This approach creates the agglomerative matrix for the recommendation using the review data. The customer series matrix, customer series binary matrix, product series matrix, and product series binary matrix make up the agglomerative matrix. The product grouping is carried out to group the similar products using DEC for retrieving the optimal product. Moreover, the bi-level matching generates the best group customer sequence in which the relevant customers are retrieved using tversky index and angular distance. Also, the final product suggestion is made using matrix factorization, with the goal of recommending to clients the product with the highest rating. Also, according to the experimental results, the developed DEC with the matrix factorization approach produced better results with respect to f-measure values of 0.902, precision values of 0.896, and recall values of 0.908, respectively.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135302175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
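The rating-prediction step described above uses matrix factorization: each customer and product gets a low-dimensional latent vector, and a rating is predicted as their dot product. A minimal SGD sketch of the generic technique (the paper's specific loss, dimensions, and hyperparameters are assumptions here):

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.02, reg=0.02,
              epochs=1000, seed=0):
    """Fit user factors P and item factors Q by stochastic gradient
    descent so that dot(P[u], Q[i]) approximates each observed rating.
    `ratings` is a list of (user, item, rating) triples."""
    rnd = random.Random(seed)
    P = [[rnd.uniform(0.0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rnd.uniform(0.0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # Gradient step on squared error with L2 regularization.
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, u, i):
    """Predicted rating of item i by user u."""
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))
```

Recommending then amounts to ranking the candidate products of a group by their predicted ratings and suggesting the highest-rated one.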
Norman Dias, Mouleeswaran Singanallur Kumaresan, Reeja Sundaran Rajakumari
The password used to authenticate users is vulnerable to shoulder-surfing attacks, in which attackers directly observe users and steal their passwords without any other technical means. The graphical password system is regarded as a likely successor to the alphanumeric password system, and a number of programs already make considerable use of graphical password-based authentication for system privacy and security. With a graphical password, the user chooses images for the authentication procedure; graphical password approaches are generally more resistant to observation than text-based passwords. In this paper, an effective graphical password authentication model, named Deep Residual Network based Graphical Password, is introduced. The graphical password authentication process includes three phases: registration, login, and authentication. The two-step registration process covers secret pass-image selection and challenge set generation. Challenge set generation is mainly carried out by producing decoy and pass images through an edge detection process, where edge detection is performed using a Deep Residual Network classifier. The developed Deep Residual Network based Graphical Password algorithm outperforms other existing graphical password authentication methods, achieving an Information Retention Rate of 0.1716 and a Password Diversity Score of 0.1643.
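The abstract outlines a challenge-based login: at each attempt the user must pick their secret pass images out of a grid that also contains decoys. The paper's decoy generation (edge detection with a Deep Residual Network) is not reproduced here; the sketch below only illustrates the surrounding challenge/verification flow, with image identifiers, grid size, and pool names all hypothetical:

```python
import random

def build_challenge_set(pass_images, decoy_pool, grid_size=9, n_pass=3,
                        seed=None):
    """Build one login challenge: n_pass of the user's secret pass images
    mixed with decoys, shuffled into a grid of grid_size thumbnails.
    Returns the grid and the set of pass images the user must select."""
    rnd = random.Random(seed)
    chosen_pass = rnd.sample(list(pass_images), n_pass)
    decoys = rnd.sample([d for d in decoy_pool if d not in pass_images],
                        grid_size - n_pass)
    grid = chosen_pass + decoys
    rnd.shuffle(grid)  # pass images must not sit in predictable positions
    return grid, set(chosen_pass)

def authenticate(selection, expected_pass):
    """The attempt succeeds only if exactly the pass images are selected."""
    return set(selection) == expected_pass
```

Because a fresh challenge grid is drawn per login, a shoulder-surfer who observes one session learns only that session's pass/decoy mix, which is the property such schemes aim for.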
{"title":"Deep learning based graphical password authentication approach against shoulder-surfing attacks","authors":"Norman Dias, Mouleeswaran Singanallur Kumaresan, Reeja Sundaran Rajakumari","doi":"10.3233/mgs-230024","DOIUrl":"https://doi.org/10.3233/mgs-230024","url":null,"abstract":"The password used to authenticate users is vulnerable to shoulder-surfing assaults, in which attackers directly observe users and steal their passwords without using any other technical upkeep. The graphical password system is regarded as a likely backup plan to the alphanumeric password system. Additionally, for system privacy and security, a number of programs make considerable use of the graphical password-based authentication method. The user chooses the image for the authentication procedure when using a graphical password. Furthermore, graphical password approaches are more secure than text-based password methods. In this paper, the effective graphical password authentication model, named as Deep Residual Network based Graphical Password is introduced. Generally, the graphical password authentication process includes three phases, namely registration, login, and authentication. The secret pass image selection and challenge set generation process is employed in the two-step registration process. The challenge set generation is mainly carried out based on the generation of decoy and pass images by performing an edge detection process. In addition, edge detection is performed using the Deep Residual Network classifier. The developed Deep Residual Network based Graphical Password algorithm outperformance than other existing graphical password authentication methods in terms of Information Retention Rate and Password Diversity Score of 0.1716 and 0.1643, respectively.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":"1 1","pages":""},"PeriodicalIF":0.7,"publicationDate":"2023-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42348269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}