The electrocardiogram (ECG) is an effective non-invasive clinical tool that reveals the functionality and rhythm of the heart. The non-stationary nature of the ECG signal, the presence of noise, and heartbeat abnormalities make it difficult for clinicians to diagnose arrhythmia. Most existing models concentrate only on classification accuracy. In this manuscript, an automated model is introduced that classifies arrhythmia types from ECG signals while also addressing computational complexity and time. After the signals are collected from the MIT-BIH database, signal transformation and decomposition are performed by the Multiscale Local Polynomial Transform (MLPT) and Ensemble Empirical Mode Decomposition (EEMD). The decomposed ECG signals are passed to the feature extraction phase, which applies six techniques: standard deviation, zero crossing rate, mean curve length, Hjorth parameters, mean Teager energy, and log energy entropy. Next, feature dimensionality reduction and arrhythmia classification are performed using an improved Firefly Optimization Algorithm and an autoencoder. The selection of optimal feature vectors by the improved Firefly Optimization Algorithm reduces the computational complexity to linear and requires 18.23 seconds of computation time. The improved Firefly Optimization Algorithm and autoencoder model achieved 98.96% accuracy in arrhythmia type classification, higher than the comparative models.
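For concreteness, a minimal numpy sketch of one plausible formulation of the six listed features follows. This is an editorial illustration, not the authors' implementation; details such as the zero-crossing normalisation and the entropy's epsilon are assumptions.

```python
import numpy as np

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def ecg_features(x, eps=1e-12):
    """The six per-window features named in the abstract (Hjorth counts as
    three numbers), in one common time-domain formulation."""
    x = np.asarray(x, dtype=float)
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)   # zero crossing rate
    mcl = np.mean(np.abs(np.diff(x)))                # mean curve length
    teager = np.mean(x[1:-1]**2 - x[:-2] * x[2:])    # mean Teager energy
    log_e = np.sum(np.log(x**2 + eps))               # log energy entropy
    return np.array([np.std(x), zcr, mcl, *hjorth(x), teager, log_e])
```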
{"title":"Analysis and classification of arrhythmia types using improved firefly optimization algorithm and autoencoder model","authors":"Mala Sinnoor, Shanthi Kaliyil Janardhan","doi":"10.3233/mgs-230022","DOIUrl":"https://doi.org/10.3233/mgs-230022","url":null,"abstract":"In the present scenario, Electrocardiogram (ECG) is an effective non-invasive clinical tool, which reveals the functionality and rhythm of the heart. The non-stationary nature of ECG signal, noise existence, and heartbeat abnormality makes it difficult for clinicians to diagnose arrhythmia. The most of the existing models concentrate only on classification accuracy. In this manuscript, an automated model is introduced that concentrates on arrhythmia type classification using ECG signals, and also focuses on computational complexity and time. After collecting the signals from the MIT-BIH database, the signal transformation and decomposition are performed by Multiscale Local Polynomial Transform (MLPT) and Ensemble Empirical Mode Decomposition (EEMD). The decomposed ECG signals are given to the feature extraction phase for extracting features. The feature extraction phase includes six techniques: standard deviation, zero crossing rate, mean curve length, Hjorth parameters, mean Teager energy, and log energy entropy. Next, the feature dimensionality reduction and arrhythmia classification are performed utilizing the improved Firefly Optimization Algorithm and autoencoder. The selection of optimal feature vectors by the improved Firefly Optimization Algorithm reduces the computational complexity to linear and consumes computational time of 18.23 seconds. The improved Firefly Optimization Algorithm and autoencoder model achieved 98.96% of accuracy in the arrhythmia type classification, which is higher than the comparative models.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2023-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48931377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ayyoub Kalache, M. Badri, Farid Mokhati, M. C. Babahenini
Multi-agent systems are proposed as a solution to meet today's software requirements: open and distributed architectures with dynamic and adaptive behaviour. Like any other software, multi-agent systems are error-prone to develop; testing is therefore a key activity to ensure the quality of the developed product. This paper sheds light on agent testing, as the agent is the primary artefact of any multi-agent system's testing process. A framework called the JADE Testing Framework (JTF) is proposed for testing agents on the JADE platform. JTF allows testing agents at two levels: the unit level (inner components) and the agent level (agent interactions). JTF results from the integration of two testing solutions: JAT, a well-known framework for testing JADE agent interactions, and UJade, a new solution developed for agent unit testing. UJade also provides a toolbox that enhances JAT's capabilities. The usability and effectiveness of JTF in JADE agent testing are supported by an empirical study conducted on seven multi-agent systems. The results show that when an agent's code can be tested at either the agent or the unit level, UJade consumes less testing effort than JAT, and that JTF provides better testing capabilities, with tests that are more effective than those developed using UJade or JAT alone.
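JTF itself is a Java framework built on the JADE platform, and its real API is not shown here. Purely to illustrate the two testing levels the paper distinguishes, the following Python sketch unit-tests an agent's inner component and, separately, its interaction behaviour against a mocked message transport; all class and method names are hypothetical.

```python
import unittest
from unittest.mock import MagicMock

class PriceAgent:
    """Toy agent: an inner component (quote) plus a message-driven behaviour."""
    def __init__(self, transport):
        self.transport = transport           # used to send messages to peers

    def quote(self, base_price, discount):   # inner component -> unit level
        return round(base_price * (1 - discount), 2)

    def on_request(self, msg):               # interaction -> agent level
        reply = {"to": msg["from"], "price": self.quote(msg["base"], 0.1)}
        self.transport.send(reply)

class UnitLevelTest(unittest.TestCase):      # UJade-style: inner components
    def test_quote(self):
        self.assertEqual(PriceAgent(None).quote(100.0, 0.1), 90.0)

class AgentLevelTest(unittest.TestCase):     # JAT-style: agent interactions
    def test_reply_is_sent(self):
        transport = MagicMock()
        PriceAgent(transport).on_request({"from": "buyer", "base": 100.0})
        transport.send.assert_called_once_with({"to": "buyer", "price": 90.0})

if __name__ == "__main__":
    unittest.main()
```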
{"title":"A testing framework for JADE agent-based software","authors":"Ayyoub Kalache, M. Badri, Farid Mokhati, M. C. Babahenini","doi":"10.3233/mgs-230023","DOIUrl":"https://doi.org/10.3233/mgs-230023","url":null,"abstract":"Multi-agent systems are proposed as a solution to mitigate nowadays software requirements: open and distributed architectures with dynamic and adaptive behaviour. Like any other software, multi-agent systems development process is error-prone; thus testing is a key activity to ensure the quality of the developed product. This paper sheds light on agent testing as it is the primary artefact for any multi-agent system’s testing process. A framework called JADE Testing Framework (JTF) for JADE platform’s agent testing is proposed. JTF allows testing agents at two levels: unit (inner-components) and agent (agent interactions) levels. JTF is the result of the integration of two testing solutions: JAT a well-known framework for JADE’s agent’s interaction testing and UJade, a new solution that was developed for agent’s unit testing. UJade provides also a toolbox that allows for enhancing JAT capabilities. The evidence of JTF usability and effectiveness in JADE agent testing was supported by an empirical study conducted on seven multi-agent systems. The results of the study show that: when an agent’s code can be tested either at agent or unit levels UJade is less test’s effort consuming than JAT; JTF provides better testing capabilities and the developed tests are more effective than those developed using UJade or JAT alone.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2023-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49445783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software reliability refers to the ability of a system to perform its intended function under specified conditions for a specified period of time. The first critical step in the software reliability testing process is to create a Software Operational Profile (SOP). Several methodologies for creating SOPs have been proposed. Nonetheless, nearly all of them have neglected the specificities of new software paradigms, despite the fact that these paradigms are generally distinguished by their own concepts and methodologies. One of these paradigms is multi-agent systems. Rather than using a generic methodology, it would be more useful to propose one specific to creating SOPs. In this paper, we propose a methodology for developing an operational profile for a specific kind of multi-agent system, so-called normative multi-agent systems. A detailed case study is used to demonstrate the methodology.
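An operational profile is, at its core, a set of operations weighted by their estimated occurrence probabilities, from which reliability tests are sampled. The sketch below is a generic illustration with made-up operations loosely suggestive of a normative multi-agent system; it does not reproduce the paper's methodology.

```python
import random

# Hypothetical operational profile: each operation is weighted by its
# estimated occurrence probability in the field.
profile = {
    "register_agent": 0.10,
    "send_message":   0.45,
    "violate_norm":   0.05,   # normative MAS: norm violation handling
    "apply_sanction": 0.05,
    "update_belief":  0.35,
}
assert abs(sum(profile.values()) - 1.0) < 1e-9  # probabilities must sum to 1

def draw_test_cases(profile, n, seed=0):
    """Sample n operations according to the profile, for reliability testing."""
    rng = random.Random(seed)
    ops, weights = zip(*profile.items())
    return rng.choices(ops, weights=weights, k=n)

print(draw_test_cases(profile, 10))
```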
{"title":"Operational profile development methodology for normative multi-agent systems","authors":"Yahia Menassel, Toufik Marir, Farid Mokhati","doi":"10.3233/mgs-221507","DOIUrl":"https://doi.org/10.3233/mgs-221507","url":null,"abstract":"Software reliability refers to the ability of a system to perform its intended function under specified conditions for a specified period of time. The first critical step in the software reliability testing process is to create a Software Operational Profile (SOP). Several methodologies for creating SOP have been proposed. Nonetheless, nearly all the proposed studies have neglected the uniqueness of the new software paradigms, despite the fact that these are generally distinguished by their own concepts and methodologies. One of these software paradigms is multi-agent systems. Rather than using a generic one, it would be more useful to propose a specific methodology for creating SOP. In this paper, we propose a methodology for developing Operational Profile for specific kinds of multi-agent systems (so-called normative multi-agent systems). A detailed case study is used to demonstrate this methodology.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2023-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46546817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
V. S. N. Kanuboyina, T. Shankar, Rama Raju Venkata Penmetsa
In recent decades, automatic emotion state classification has become an important technology for human-machine interaction. In Electroencephalography (EEG) based emotion classification, most existing methodologies cannot capture the context information of the EEG signal and ignore the correlation between dissimilar EEG channels. Therefore, in this study, a deep learning based automatic method is proposed for effective emotion state classification. Firstly, EEG signals were acquired in real time and from the Database for Emotion Analysis using Physiological Signals (DEAP), and a band-pass filter from 0.3 Hz to 45 Hz was applied to eliminate both high- and low-frequency noise. Next, two feature extraction techniques, power spectral density and differential entropy, were employed to extract active feature values that effectively capture the contextual and spatial information of the EEG signals. Finally, principal component analysis and an artificial neural network were used for feature dimensionality reduction and emotion state classification. The experimental evaluation showed that the proposed method achieved accuracies of 96.38% and 97.36% on DEAP, and 92.33% and 89.37% on a real-time database, for the arousal and valence emotion states respectively. The achieved recognition accuracy is higher than that of a support vector machine on both databases.
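The described pipeline (0.3-45 Hz band-pass, PSD and differential entropy features, PCA, ANN) maps onto standard scipy/sklearn building blocks. A rough sketch under common assumptions, not the authors' code: DEAP's 128 Hz preprocessed sampling rate, a Welch PSD, and the Gaussian closed form for differential entropy.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

FS = 128  # DEAP's preprocessed sampling rate (assumption)

def bandpass(x, lo=0.3, hi=45.0, fs=FS, order=4):
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def channel_features(x, fs=FS):
    """Mean band power (from the Welch PSD) and Gaussian differential entropy."""
    f, psd = welch(x, fs=fs, nperseg=fs * 2)
    de = 0.5 * np.log(2 * np.pi * np.e * np.var(x))  # differential entropy
    return np.array([psd.mean(), de])

def extract(trials):  # trials: (n_trials, n_channels, n_samples)
    return np.array([[feat for ch in trial
                      for feat in channel_features(bandpass(ch))]
                     for trial in trials])

# PCA for dimensionality reduction, then an ANN (MLP) classifier.
model = make_pipeline(PCA(n_components=0.95),
                      MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))
# model.fit(extract(train_trials), train_labels)  # e.g. arousal high/low
```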
{"title":"Electroencephalography based human emotion state classification using principal component analysis and artificial neural network","authors":"V. S. N. Kanuboyina, T. Shankar, Rama Raju Venkata Penmetsa","doi":"10.3233/mgs-220333","DOIUrl":"https://doi.org/10.3233/mgs-220333","url":null,"abstract":"In recent decades, the automatic emotion state classification is an important technology for human-machine interactions. In Electroencephalography (EEG) based emotion classification, most of the existing methodologies cannot capture the context information of the EEG signal and ignore the correlation information between dissimilar EEG channels. Therefore, in this study, a deep learning based automatic method is proposed for effective emotion state classification. Firstly, the EEG signals were acquired from the real time and databases for emotion analysis using physiological signals (DEAP), and further, the band-pass filter from 0.3 Hz to 45 Hz is utilized to eliminate both high and low-frequency noise. Next, two feature extraction techniques power spectral density and differential entropy were employed for extracting active feature values, which effectively learn the contextual and spatial information of EEG signals. Finally, principal component analysis and artificial neural network were developed for feature dimensionality reduction and emotion state classification. The experimental evaluation showed that the proposed method achieved 96.38% and 97.36% of accuracy on DEAP, and 92.33% and 89.37% of accuracy on a real-time database for arousal and valence emotion states. The achieved recognition accuracy is higher compared to the support vector machine on both databases.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":"265 1","pages":"263-278"},"PeriodicalIF":0.7,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87069357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human activity recognition has received a lot of attention in recent decades because of its wide variety of uses, including video interpretation and surveillance, human-robot interaction, healthcare, and sport analysis. Recognizing human activity from video frames or still images is challenging because of factors such as viewpoint, partial occlusion, lighting, background clutter, scale differences, and appearance. Numerous applications, including human-computer interfaces, robotics for the analysis of human behavior, and video surveillance systems, require an activity recognition system. This work introduces a human activity recognition system comprising three stages: preprocessing, feature extraction, and classification. The input video (image frames) is first preprocessed with median filtering and background subtraction. Several features, including Improved Bag of Visual Words, the local texton XOR pattern, and Spider Local Image Feature (SLIF) based features, are extracted from the preprocessed images. The next step classifies the data using a hybrid classifier that blends a Bidirectional Gated Recurrent Unit (Bi-GRU) and Long Short Term Memory (LSTM). To boost the effectiveness of the system, the weights of both the LSTM and the Bi-GRU are optimally determined using the Improved Aquila Optimization with City Block Distance Evaluation (IACBD) method. Finally, the effectiveness of the proposed approach is evaluated against other traditional models using various performance metrics.
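The preprocessing stage described above (median filtering plus background subtraction) is straightforward with OpenCV. A minimal sketch follows, with MOG2 assumed as the background model since the abstract does not name one.

```python
import cv2

def preprocess(video_path, ksize=5):
    """Median-filter each frame and subtract the background, yielding
    foreground masks for the later feature-extraction stage."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.medianBlur(frame, ksize)   # suppress impulse noise
        masks.append(subtractor.apply(frame))  # foreground mask per frame
    cap.release()
    return masks
```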
{"title":"Hybrid classifier model with tuned weights for human activity recognition","authors":"Anshuman Tyagi, Pawan Singh, H. Dev","doi":"10.3233/mgs-220328","DOIUrl":"https://doi.org/10.3233/mgs-220328","url":null,"abstract":"A wide variety of uses, such as video interpretation and surveillance, human-robot interaction, healthcare, and sport analysis, among others, make this technology extremely useful, human activity recognition has received a lot of attention in recent decades. human activity recognition from video frames or still images is a challenging procedure because of factors including viewpoint, partial occlusion, lighting, background clutter, scale differences, and look. Numerous applications, including human-computer interfaces, robotics for the analysis of human behavior, and video surveillance systems all require the activity recognition system. This work introduces the human activity recognition system, which includes 3 stages: preprocessing, feature extraction, and classification. The input video (image frames) are subjected for preprocessing stage which is processed with median filtering and background subtraction. Several features, including the Improved Bag of Visual Words, the local texton XOR pattern, and the Spider Local Picture Feature (SLIF) based features, are extracted from the pre-processed image. The next step involves classifying data using a hybrid classifier that blends Bidirectional Gated Recurrent (Bi-GRU) and Long Short Term Memory (LSTM). To boost the effectiveness of the suggested system, the weights of the Long Short Term Memory (LSTM) and Bidirectional Gated Recurrent (Bi-GRU) are both ideally determined using the Improved Aquila Optimization with City Block Distance Evaluation (IACBD) method. Finally, the effectiveness of the suggested approach is evaluated in comparison to other traditional models using various performance metrics.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":"10 1","pages":"317-344"},"PeriodicalIF":0.7,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88386426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, we propose a new approach for dynamically coordinating generated agents' plans. The purpose is to take into consideration new conflicts introduced in new versions of agents' plans. The approach consists of finding the best combination, containing one plan per agent drawn from that agent's set of possible plans, whose execution does not entail any conflict. This combination of plans is reconstructed dynamically each time agents decide to change their plans to account for unpredictable changes in the environment. This not only ensures that conflicts newly introduced in the updated plans are taken into account, but also allows agents to deal solely with the execution of their actions and not with the resolution of conflicts. For this, we use genetic algorithms, where the proposed fitness function is defined based on the number of conflicts that agents can experience in each combination of plans. As part of our work, we used a concrete case to illustrate and show the usefulness of our approach.
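The genetic-algorithm formulation is concrete enough to sketch: a chromosome is one plan index per agent, and fitness is the number of conflicting plan pairs, to be minimised. The self-contained sketch below uses simple one-point crossover and random mutation; the paper's exact operators and encoding are not specified here.

```python
import random

def count_conflicts(combo, conflicts):
    """Conflicting plan pairs in a combination; combo[i] is agent i's plan.
    `conflicts` holds pairs ((i, plan_i), (j, plan_j)) with i < j."""
    return sum(((i, combo[i]), (j, combo[j])) in conflicts
               for i in range(len(combo)) for j in range(i + 1, len(combo)))

def coordinate(plans_per_agent, conflicts, pop=30, gens=100, seed=0):
    """GA over plan combinations; fitness = conflict count, minimised."""
    rng = random.Random(seed)
    n = len(plans_per_agent)
    combo = lambda: tuple(rng.randrange(plans_per_agent[a]) for a in range(n))
    population = [combo() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: count_conflicts(c, conflicts))
        if count_conflicts(population[0], conflicts) == 0:
            break                                  # conflict-free combination
        survivors = population[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = list(p1[:cut] + p2[cut:])
            if rng.random() < 0.1:                 # random mutation
                a = rng.randrange(n)
                child[a] = rng.randrange(plans_per_agent[a])
            children.append(tuple(child))
        population = survivors + children
    return min(population, key=lambda c: count_conflicts(c, conflicts))

# 3 agents, 2 candidate plans each; agent 0's plan 0 clashes with agent 1's plan 1.
print(coordinate([2, 2, 2], {((0, 0), (1, 1))}))
```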
{"title":"A new approach for coordinating generated agents' plans dynamically","authors":"N. H. Dehimi, Tahar Guerram, Zakaria Tolba","doi":"10.3233/mgs-220304","DOIUrl":"https://doi.org/10.3233/mgs-220304","url":null,"abstract":"In this work, we propose a new approach for coordinating generated agents’ plans dynamically. The purpose is to take into consideration new conflicts introduced in new versions of agents’ plans. The approach consists in finding the best combination which contains one plan for each agent among its set of possible plans whose execution does not entail any conflict. This combination of plans is reconstructed dynamically, each time agents decide to change their plans to take into account unpredictable changes in the environment. This not only ensures that new conflicts are likely to be introduced in the new plans that are taken into account but also it allows agents to deal, solely, with the execution of their actions and not with the resolution of conflicts. For this, we use genetic algorithms where the proposed fitness function is defined based on the number of conflicts that agents can experience in each combination of plans. As part of our work, we used a concrete case to illustrate and show the usefulness of our approach.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":"4 1","pages":"219-239"},"PeriodicalIF":0.7,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76961365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Education is developing very fast with the advancement of technology and the arrival of the smart era. One can store all educational certificates and credentials in the form of an electronic wallet or folder, and this electronic form lets users transfer certificates from one place to another very easily. The "data island" phenomenon, centralized data storage, confidentiality, reduced security, and integrity are common problems of electronic data transfer. This study presents a safe way of sharing digital documents that uses blockchain technology and an attribute-based cryptosystem to offer a creative solution to the abovementioned issues. The proposed scheme uses Ethereum smart contracts and achieves fine-grained access control through attribute-based encryption. Finally, we verified our model on the test network and compared its performance with some existing state-of-the-art schemes. Simulation results show that the proposed scheme is more feasible and effective in challenging environments.
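True attribute-based encryption relies on pairing-based cryptography and, in this scheme, on Ethereum smart contracts for access control; neither is reproduced below. As a deliberately simplified stand-in, this sketch guards an ordinary symmetric key behind an AND-policy over attributes, just to make the fine-grained access-control idea concrete. All names are hypothetical.

```python
from cryptography.fernet import Fernet  # pip install cryptography

class AttributeAuthority:
    """Toy stand-in for CP-ABE: the key is released only to users whose
    attribute set satisfies the document's policy (an AND of attributes)."""
    def __init__(self):
        self._keys = {}  # doc_id -> (policy, symmetric key)

    def encrypt(self, doc_id, data, policy):
        key = Fernet.generate_key()
        self._keys[doc_id] = (frozenset(policy), key)
        return Fernet(key).encrypt(data)

    def decrypt(self, doc_id, token, user_attributes):
        policy, key = self._keys[doc_id]
        if not policy <= set(user_attributes):  # every policy attribute needed
            raise PermissionError("attributes do not satisfy policy")
        return Fernet(key).decrypt(token)

aa = AttributeAuthority()
ct = aa.encrypt("cert-42", b"B.Sc. certificate", {"student", "cs-dept"})
print(aa.decrypt("cert-42", ct, {"student", "cs-dept", "2023"}))  # access granted
```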
{"title":"Secure digital documents sharing using blockchain and attribute-based cryptosystem","authors":"G. Verma, Soumen Kanrar","doi":"10.3233/mgs-221361","DOIUrl":"https://doi.org/10.3233/mgs-221361","url":null,"abstract":"Education is developing very fast with the advancement of technology and the process of the smart era. One can store all educational certificates and credentials in the form of an electronic wallet or a folder. By using this electronic transformation of certificates, users can transfer the certificates from one place to another very easily. The “data island” phenomenon, central data storing, confidentiality, reduced security, and integrity are common problems of electronic data transfer. This study presents a safe sharing of digital documents which uses blockchain technology and an attributed-based cryptosystem to offer a creative solution to the abovementioned issues. The proposed scheme uses Ethereum smart contracts and achieves fine-grain access control by using attribute-based encryption. Finally, we verified our model using the test network and compared the performance with some existing state-of-arts. The results of proposed scheme generated by simulations are more feasible and effective in challenging environments.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":"18 1","pages":"365-379"},"PeriodicalIF":0.7,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87481817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arrhythmia classification on Electrocardiogram (ECG) signals is an important process in the diagnosis of cardiac and arrhythmia diseases. Existing research in arrhythmia classification is limited by the data imbalance problem and by overfitting in classification. This research applies Fuzzy C-Means (FCM) with Enhanced Tolerance-based Intuitionistic Fuzzy Rough Set Theory (ETIFRST) for feature selection in arrhythmia classification. The features selected by FCM-ETIFRST were applied to a Multi-class Support Vector Machine (MSVM) for arrhythmia classification. A ResNet18 Convolutional Neural Network (CNN) was applied for feature extraction from the input signal to overcome the data imbalance problem, and conventional features along with the CNN features are fed to the FCM-ETIFRST feature selection process. The FCM-ETIFRST method is evaluated on the MIT-BIH and CPCS 2018 datasets. On MIT-BIH, FCM-ETIFRST achieves 98.95% accuracy versus 98.66% for Focal loss-CNN; on CPCS 2018, it achieves 98.45% accuracy versus 93.6% for the Explainable Deep learning Model (XDM).
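Of the pipeline above, only the plain Fuzzy C-Means step has a standard closed form; the ETIFRST enhancement is specific to the paper and omitted here. A minimal numpy sketch of FCM with the usual fuzzifier m = 2:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain FCM: returns the membership matrix U (n x c) and cluster centers.
    Standard updates: centers are membership-weighted means; memberships are
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)
    return U, centers

# Toy usage: cluster 2-D points into two fuzzy groups.
X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
U, centers = fuzzy_c_means(X, c=2)
print(np.round(U, 2), centers, sep="\n")
```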
{"title":"Enhanced tolerance-based intuitionistic fuzzy rough set theory feature selection and ResNet-18 feature extraction model for arrhythmia classification","authors":"M. Rajeshwari, K. Kavitha","doi":"10.3233/mgs-220317","DOIUrl":"https://doi.org/10.3233/mgs-220317","url":null,"abstract":"Arrhythmia classification on Electrocardiogram (ECG) signals is an important process for the diagnosis of cardiac disease and arrhythmia disease. The existing researches in arrhythmia classification have limitations of imbalance data problem and overfitting in classification. This research applies Fuzzy C-Means (FCM) – Enhanced Tolerance-based Intuitionistic Fuzzy Rough Set Theory (ETIFRST) for feature selection in arrhythmia classification. The selected features from FCM-ETIFRST were applied to the Multi-class Support Vector Machine (MSVM) for arrhythmia classification. The ResNet18 – Convolution Neural Network (CNN) was applied for feature extraction in input signal to overcome imbalance data problem. Conventional feature extraction along with CNN features are applied for FCM-ETIFRST feature selection process. The FCM-ETIFRST method in arrhythmia classification is evaluated on MIT-BIH and CPCS 2018 dataset. The FCM-ETIFRST has 98.95% accuracy and Focal loss-CNN has 98.66% accuracy on MIT-BIH dataset. The FCM-ETIFRST method has 98.45% accuracy and Explainable Deep learning Model (XDM) method have 93.6% accuracy on CPCS 2018 dataset.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":"32 1","pages":"241-261"},"PeriodicalIF":0.7,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85268896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dr. Chandra Sekhar Kolli, Nihar M. Ranjan, Dharani Kumar Talapula, Vikram S. Gawali, S. Biswas
The tremendous development and rapid evolution of computing have urged many organizations to expand their data as well as their computational needs. Such services must offer security guarantees such as confidentiality, integrity, and availability; a highly secured domain is thus a fundamental need of cloud environments. In addition, security breaches are growing equally fast in the cloud because of its sophisticated services, and they cannot be mitigated efficiently through firewall rules and packet filtering methods. In order to mitigate malicious attacks and to detect malicious behavior with high detection accuracy, an effective strategy named Multiverse Fractional Calculus (MFC) based hybrid deep learning is proposed. Here, two network classifiers, a Hierarchical Attention Network (HAN) and Random Multimodel Deep Learning (RMDL), are employed to detect the presence of malicious behavior. The network classifiers are trained by exploiting the proposed MFC, an integration of the multi-verse optimizer and fractional calculus. The proposed MFC-based hybrid deep learning approach attained superior results, with testing sensitivity, accuracy, and specificity of 0.949, 0.939, and 0.947 respectively.
{"title":"Multiverse fractional calculus based hybrid deep learning and fusion approach for detecting malicious behavior in cloud computing environment","authors":"Dr. Chandra Sekhar Kolli, Nihar M. Ranjan, Dharani Kumar Talapula, Vikram S. Gawali, S. Biswas","doi":"10.3233/mgs-220214","DOIUrl":"https://doi.org/10.3233/mgs-220214","url":null,"abstract":"The tremendous development and rapid evolution in computing advancements has urged a lot of organizations to expand their data as well as computational needs. Such type of services offers security concepts like confidentiality, integrity, and availability. Thus, a highly secured domain is the fundamental need of cloud environments. In addition, security breaches are also growing equally in the cloud because of the sophisticated services of the cloud, which cannot be mitigated efficiently through firewall rules and packet filtering methods. In order to mitigate the malicious attacks and to detect the malicious behavior with high detection accuracy, an effective strategy named Multiverse Fractional Calculus (MFC) based hybrid deep learning approach is proposed. Here, two network classifiers namely Hierarchical Attention Network (HAN) and Random Multimodel Deep Learning (RMDL) are employed to detect the presence of malicious behavior. The network classifier is trained by exploiting proposed MFC, which is an integration of multi-verse optimizer and fractional calculus. The proposed MFC-based hybrid deep learning approach has attained superior results with utmost testing sensitivity, accuracy, and specificity of 0.949, 0.939, and 0.947.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":"18 1","pages":"193-217"},"PeriodicalIF":0.7,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74733219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present the application of multi-objective optimisation analytic methodologies to goal models in this research, with the intention of providing various benefits beyond the initial modelling act. Optimisation analysis can be used by modellers to evaluate goal satisfaction, evaluate high-level design alternatives, aid analysts in deciding on high-level requirements and system design, verify the sanity of a model, and improve communication and learning. Goal model analysis may be done in a variety of ways, depending on the nature of the model and the goal of the study. In our work, we use the Goal-Oriented Requirement Language (GRL), part of the User Requirements Notation (URN), an International Telecommunication Union (ITU) recommendation that offers the first standard goal-oriented language. Existing optimisation methods are geared towards maximising objective functions; real-world problems, however, necessitate the simultaneous optimisation of maximising and minimising objective functions. This work explores a GRL model analysis that can accommodate the conflicting goals of the various inter-dependent actors in a goal model using the Analytic Hierarchy Process (AHP). By evaluating the qualitative or quantitative satisfaction levels of the actors and intentional elements (e.g., objectives and tasks) that make up the model, we construct a multi-objective optimisation method for analysing the GRL model. The proposed hybrid technique evaluates the contribution of alternatives to the accomplishment of the top softgoals and integrates it with the top softgoals' normalised relative priority values. The integration result may be utilised to assess multiple alternatives against the requirements problem. Although the URN standard does not mandate a specific propagation algorithm, it outlines certain criteria for developing evaluation mechanisms. Case studies were used to assess the viability of the suggested approach in a simulated environment using Java Eclipse and IBM CPLEX. The findings revealed that the proposed method can be used to analyse goals in goal models with opposing multi-objective functions.
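The AHP half of the proposed technique, deriving normalised priorities for the top softgoals from a pairwise-comparison matrix via its principal eigenvector together with Saaty's consistency check, can be sketched directly; the comparison judgements below are illustrative only, and the integration with GRL contribution links is paper-specific and omitted.

```python
import numpy as np

# Saaty's random consistency index, indexed by matrix size n
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_priorities(A):
    """Priorities = principal eigenvector of the pairwise-comparison matrix A,
    plus Saaty's consistency ratio (CR < 0.1 is conventionally acceptable)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                 # normalised priorities
    ci = (eigvals[k].real - n) / (n - 1)         # consistency index
    return w, ci / RI[n]                         # (priorities, CR)

# Pairwise comparison of three softgoals (illustrative judgements):
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, cr = ahp_priorities(A)
print(np.round(w, 3), round(cr, 3))  # weights sum to 1; CR here is below 0.1
```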
{"title":"Goal-oriented requirement language model analysis using analytic hierarchy process","authors":"Sreenithya Sumesh, A. Krishna, R.Z. ITU-T","doi":"10.3233/mgs-220242","DOIUrl":"https://doi.org/10.3233/mgs-220242","url":null,"abstract":"We present the application of multi-objective optimisation analytic methodologies to goal models in this research, with the intention of providing various benefits beyond the initial modelling act. Optimisation analysis can be used by modellers to evaluate goal satisfaction, evaluate high-level design alternatives, aid analysts in deciding on high-level requirements and system design, verify the sanity of a model, and improve communication and learning. Goal model analysis may be done in a variety of methods, depending on the nature of the model and the study’s goal. In our work, we use the Goal-Oriented Requirement Language (GRL), which is part of the User Requirements Notation (URN), a new International Telecommunication Union (ITU) recommendation that offers the first standard goal-oriented language. Existing optimisation methods are geared towards maximising objective functions. On the other hand, real-world problems necessitate simultaneous optimisation of both maximising and minimising objective functions. This work explores a GRL model analysis that may accommodate the conflicting goals of various inter-dependent actors in a goal model using the Analytic Hierarchy Process (AHP). By evaluating the qualitative or quantitative satisfaction levels of the actors and intentional elements (e.g., objectives and tasks) that make up the model, we construct a multi-objective optimisation method for analysis using the GRL model. The proposed hybrid technique evaluates the contribution of alternatives to the accomplishment of top softgoals. It is then integrated with the top softgoals’ normalised relative priority values. The integration result may be utilised to assess multiple alternatives based on the requirements problem. Although the URN standard does not mandate a specific propagation algorithm, it does outline certain criteria for developing evaluation mechanisms. Case studies were used to assess the viability of the suggested approach in a simulated environment using JAVA Eclipse and IBM Cplex. The findings revealed that the proposed method can be used to analyse goals in goal models with opposing multi-objective functions.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":"18 1","pages":"295-316"},"PeriodicalIF":0.7,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78017869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}