Effective movie recommendation based on improved DenseNet model
V. Lakshmi Chetana, Raj Kumar Batchu, Prasad Devarasetty, Srilakshmi Voddelli, Varun Prasad Dalli
Multiagent and Grid Systems · DOI: 10.3233/mgs-230012 · Published 2023-10-06
Recommendation systems suggest items such as songs, products, movies, and books to users on the basis of a database. A movie recommendation system typically predicts the movies a user will like from attributes stored in the database, and it is one of the most widespread, useful, and efficient applications for helping individuals choose movies with minimal decision time. Researchers have made several attempts to support such tasks, for example purchasing books or selecting movies, by developing recommendation systems, yet the majority of these systems fail to address data sparsity, cold-start issues, and malicious attacks. To overcome these problems, a new movie recommendation system is developed in this manuscript. Initially, the input data are acquired from the Movielens 1M, Movielens 100K, Yahoo Y-10-10, and Yahoo Y-20-20 databases. Next, the data are rescaled using min-max normalization, which helps handle outliers efficiently. Finally, the preprocessed data are fed to an improved DenseNet model for relevant movie recommendation, where the developed model includes a weighting factor and a class-balanced loss function to better control the risk of overfitting. The experimental results indicate that the improved DenseNet model reduces error values by roughly 5 to 10% and improves f-measure, precision, and recall by around 2% relative to conventional models on the Movielens 1M, Movielens 100K, Yahoo Y-10-10, and Yahoo Y-20-20 databases.
Skin cancer detection: Improved deep belief network with optimal feature selection
Jinu P. Sainudeen, Ceronmani Sharmila V, Parvathi R
Multiagent and Grid Systems · DOI: 10.3233/mgs-230040 · Published 2023-10-06
Melanoma has grown increasingly prevalent over the past few decades, and timely identification is crucial for lowering the mortality rates linked to this kind of skin cancer. An automated, trustworthy system that can detect the presence of melanoma can therefore be very helpful in medical diagnostics. To this end, we introduce a five-stage method for detecting skin cancer. In the initial pre-processing stage, the input images are processed with histogram equalization and Gaussian filtering. An Improved Balanced Iterative Reducing and Clustering using Hierarchies (I-BIRCH) method is proposed to provide better image segmentation by efficiently assigning labels to the pixels. In the third stage, features such as the Improved Local Vector Pattern, local ternary pattern, grey-level co-occurrence matrix, and local gradient patterns are extracted from the segmented images. We then propose an Arithmetic Operated Honey Badger Algorithm (AOHBA) to choose the best features from the extracted characteristics, which lowers the computational expense and training time. Finally, classification is performed with an improved Deep Belief Network (DBN) on the selected features, and the performance assessment results are compared with existing methodologies to demonstrate the effectiveness of the proposed skin cancer detection strategy.
Deep embedded clustering with matrix factorization based user rating prediction for collaborative recommendation
Jagannath E. Nalavade, Chandra Sekhar Kolli, Sanjay Nakharu Prasad Kumar
Multiagent and Grid Systems · DOI: 10.3233/mgs-230039 · Published 2023-10-06
Conventional recommendation techniques compute the similarity among products and customers in order to identify customer preferences. However, such similarity computations may yield incomplete information influenced by the similarity measures in customers' preferences, which leads to poor recommendation accuracy. Hence, this paper introduces a novel and effective technique for collaborative recommendation, namely Deep Embedded Clustering with matrix factorization (DEC with matrix factorization). The approach builds an agglomerative matrix for recommendation from the review data; the customer series matrix, customer series binary matrix, product series matrix, and product series binary matrix make up this agglomerative matrix. Product grouping is carried out with DEC to cluster similar products and retrieve the optimal product. Moreover, bi-level matching generates the best group customer sequence, in which relevant customers are retrieved using the Tversky index and angular distance. The final product suggestion is made using matrix factorization, with the goal of recommending to customers the product with the highest rating. According to the experimental results, the developed DEC with matrix factorization approach produced better results, with an f-measure of 0.902, precision of 0.896, and recall of 0.908.
{"title":"Deep embedded clustering with matrix factorization based user rating prediction for collaborative recommendation","authors":"Jagannath E. Nalavade, Chandra Sekhar Kolli, Sanjay Nakharu Prasad Kumar","doi":"10.3233/mgs-230039","DOIUrl":"https://doi.org/10.3233/mgs-230039","url":null,"abstract":"Conventional recommendation techniques utilize various methods to compute the similarity among products and customers in order to identify the customer preferences. However, such conventional similarity computation techniques may produce incomplete information influenced by similarity measures in customers’ preferences, which leads to poor accuracy on recommendation. Hence, this paper introduced the novel and effective recommendation technique, namely Deep Embedded Clustering with matrix factorization (DEC with matrix factorization) for the collaborative recommendation. This approach creates the agglomerative matrix for the recommendation using the review data. The customer series matrix, customer series binary matrix, product series matrix, and product series binary matrix make up the agglomerative matrix. The product grouping is carried out to group the similar products using DEC for retrieving the optimal product. Moreover, the bi-level matching generates the best group customer sequence in which the relevant customers are retrieved using tversky index and angular distance. Also, the final product suggestion is made using matrix factorization, with the goal of recommending to clients the product with the highest rating. Also, according to the experimental results, the developed DEC with the matrix factorization approach produced better results with respect to f-measure values of 0.902, precision values of 0.896, and recall values of 0.908, respectively.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135302175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning based graphical password authentication approach against shoulder-surfing attacks
Norman Dias, Mouleeswaran Singanallur Kumaresan, Reeja Sundaran Rajakumari
Multiagent and Grid Systems · DOI: 10.3233/mgs-230024 · Published 2023-06-08
Passwords used to authenticate users are vulnerable to shoulder-surfing attacks, in which attackers steal passwords simply by observing users directly, without any additional technical means. The graphical password system is regarded as a likely successor to the alphanumeric password system, and a number of programs make considerable use of graphical password-based authentication for system privacy and security. With a graphical password, the user chooses images for the authentication procedure, and such approaches are generally more secure than text-based password methods. In this paper, an effective graphical password authentication model, named Deep Residual Network based Graphical Password, is introduced. The graphical password authentication process includes three phases, namely registration, login, and authentication. Secret pass-image selection and challenge set generation are employed in the two-step registration process. Challenge set generation is mainly carried out by generating decoy and pass images through an edge detection process, which is performed using a Deep Residual Network classifier. The developed Deep Residual Network based Graphical Password algorithm outperforms other existing graphical password authentication methods, achieving an Information Retention Rate of 0.1716 and a Password Diversity Score of 0.1643.
Analysis and classification of arrhythmia types using improved firefly optimization algorithm and autoencoder model
Mala Sinnoor, Shanthi Kaliyil Janardhan
Multiagent and Grid Systems · DOI: 10.3233/mgs-230022 · Published 2023-06-08
In the present scenario, the Electrocardiogram (ECG) is an effective non-invasive clinical tool that reveals the functionality and rhythm of the heart. The non-stationary nature of the ECG signal, the presence of noise, and heartbeat abnormalities make it difficult for clinicians to diagnose arrhythmia. Most existing models concentrate only on classification accuracy. In this manuscript, an automated model is introduced that addresses arrhythmia type classification from ECG signals while also focusing on computational complexity and time. After collecting the signals from the MIT-BIH database, signal transformation and decomposition are performed with the Multiscale Local Polynomial Transform (MLPT) and Ensemble Empirical Mode Decomposition (EEMD). The decomposed ECG signals are passed to the feature extraction phase, which includes six techniques: standard deviation, zero crossing rate, mean curve length, Hjorth parameters, mean Teager energy, and log energy entropy. Next, feature dimensionality reduction and arrhythmia classification are performed using an improved Firefly Optimization Algorithm and an autoencoder. The selection of optimal feature vectors by the improved Firefly Optimization Algorithm reduces the computational complexity to linear and takes 18.23 seconds of computation time. The improved Firefly Optimization Algorithm and autoencoder model achieved 98.96% accuracy in arrhythmia type classification, which is higher than the comparative models.
A testing framework for JADE agent-based software
Ayyoub Kalache, M. Badri, Farid Mokhati, M. C. Babahenini
Multiagent and Grid Systems · DOI: 10.3233/mgs-230023 · Published 2023-06-08
Multi-agent systems are proposed as a solution to meet today's software requirements: open and distributed architectures with dynamic and adaptive behaviour. Like any other software, the development process of multi-agent systems is error-prone; thus testing is a key activity to ensure the quality of the developed product. This paper sheds light on agent testing, as the agent is the primary artefact of any multi-agent system's testing process. A framework called the JADE Testing Framework (JTF) is proposed for testing agents on the JADE platform. JTF allows testing agents at two levels: the unit level (inner components) and the agent level (agent interactions). JTF results from the integration of two testing solutions: JAT, a well-known framework for testing JADE agent interactions, and UJade, a new solution developed for agent unit testing. UJade also provides a toolbox that enhances JAT's capabilities. The usability and effectiveness of JTF in JADE agent testing were supported by an empirical study conducted on seven multi-agent systems. The results of the study show that when an agent's code can be tested at either the agent or unit level, UJade consumes less testing effort than JAT, and that JTF provides better testing capabilities, with the developed tests being more effective than those developed using UJade or JAT alone.
Operational profile development methodology for normative multi-agent systems
Yahia Menassel, Toufik Marir, Farid Mokhati
Multiagent and Grid Systems · DOI: 10.3233/mgs-221507 · Published 2023-06-08
Software reliability refers to the ability of a system to perform its intended function under specified conditions for a specified period of time. The first critical step in the software reliability testing process is to create a Software Operational Profile (SOP). Several methodologies for creating SOPs have been proposed. Nonetheless, nearly all of these studies have neglected the uniqueness of newer software paradigms, despite the fact that such paradigms are generally distinguished by their own concepts and methodologies. One of these paradigms is multi-agent systems. Rather than using a generic methodology, it would be more useful to propose a specific one for creating SOPs. In this paper, we propose a methodology for developing operational profiles for a specific kind of multi-agent system (so-called normative multi-agent systems). A detailed case study is used to demonstrate this methodology.
Electroencephalography based human emotion state classification using principal component analysis and artificial neural network
V. S. N. Kanuboyina, T. Shankar, Rama Raju Venkata Penmetsa
Multiagent and Grid Systems · DOI: 10.3233/mgs-220333 · Published 2023-02-03
In recent decades, automatic emotion state classification has become an important technology for human-machine interaction. In Electroencephalography (EEG) based emotion classification, most existing methodologies cannot capture the context information of the EEG signal and ignore the correlation between dissimilar EEG channels. Therefore, in this study, a deep learning based automatic method is proposed for effective emotion state classification. Firstly, the EEG signals were acquired in real time and from the Database for Emotion Analysis using Physiological signals (DEAP), and a band-pass filter from 0.3 Hz to 45 Hz was applied to eliminate both high- and low-frequency noise. Next, two feature extraction techniques, power spectral density and differential entropy, were employed to extract active feature values that effectively capture the contextual and spatial information of EEG signals. Finally, principal component analysis and an artificial neural network were used for feature dimensionality reduction and emotion state classification. The experimental evaluation showed that the proposed method achieved accuracies of 96.38% and 97.36% on DEAP, and 92.33% and 89.37% on the real-time database, for the arousal and valence emotion states respectively. The achieved recognition accuracy is higher than that of a support vector machine on both databases.
Hybrid classifier model with tuned weights for human activity recognition
Anshuman Tyagi, Pawan Singh, H. Dev
Multiagent and Grid Systems · DOI: 10.3233/mgs-220328 · Published 2023-02-03
Human activity recognition has received a lot of attention in recent decades, as a wide variety of uses, such as video interpretation and surveillance, human-robot interaction, healthcare, and sport analysis, make the technology extremely useful. Recognizing human activity from video frames or still images is a challenging procedure because of factors including viewpoint, partial occlusion, lighting, background clutter, scale differences, and appearance. Numerous applications, including human-computer interfaces, robotics for the analysis of human behavior, and video surveillance systems, require an activity recognition system. This work introduces a human activity recognition system with three stages: preprocessing, feature extraction, and classification. The input video (image frames) is first subjected to a preprocessing stage of median filtering and background subtraction. Several features, including the Improved Bag of Visual Words, the local texton XOR pattern, and Spider Local Image Feature (SLIF) based features, are then extracted from the pre-processed frames. The next step classifies the data using a hybrid classifier that blends a Bidirectional Gated Recurrent Unit (Bi-GRU) and Long Short Term Memory (LSTM). To boost the effectiveness of the suggested system, the weights of both the LSTM and the Bi-GRU are optimally determined using the Improved Aquila Optimization with City Block Distance Evaluation (IACBD) method. Finally, the effectiveness of the suggested approach is evaluated against other traditional models using various performance metrics.
A new approach for coordinating generated agents' plans dynamically
N. H. Dehimi, Tahar Guerram, Zakaria Tolba
Multiagent and Grid Systems · DOI: 10.3233/mgs-220304 · Published 2023-02-03
In this work, we propose a new approach for dynamically coordinating generated agents' plans. The purpose is to take into consideration new conflicts introduced in new versions of the agents' plans. The approach consists of finding the best combination, containing one plan for each agent drawn from its set of possible plans, whose execution does not entail any conflict. This combination of plans is reconstructed dynamically each time the agents decide to change their plans in response to unpredictable changes in the environment. This not only ensures that new conflicts likely to be introduced in the new plans are taken into account, but also allows agents to deal solely with the execution of their actions rather than with the resolution of conflicts. For this, we use genetic algorithms, where the proposed fitness function is defined on the number of conflicts that agents can experience in each combination of plans. A concrete case is used to illustrate and show the usefulness of our approach.