Pub Date: 2023-11-10 | DOI: 10.1080/1206212x.2023.2277966
S. Regilan, L.K. Hema
Title: Optimizing environmental monitoring in IoT: integrating DBSCAN with genetic algorithms for enhanced clustering
Abstract: In our study, we introduce an advanced clustering method designed for IoT-based environmental monitoring. We combine two techniques, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Genetic Algorithms (GA), into a specialized approach called EC-GAD (Enhanced Clustering using Genetic Algorithms and DBSCAN). The integrated system model relies on DBSCAN, a robust clustering algorithm capable of handling irregular cluster shapes and varying data densities, to group sensor nodes by physical proximity. To improve clustering performance, a Genetic Algorithm optimizes the parameters of DBSCAN: through an iterative process of selection, crossover, and mutation, the GA refines parameter settings based on the quality of the environmental clustering as assessed by fitness metrics. The approach is tailored to IoT deployments for environmental monitoring, covering data collection from sensor nodes and the integration of DBSCAN and GA. Particular attention is paid to choosing an appropriate distance metric and to fine-tuning the DBSCAN parameters epsilon (ε) and minPts to match the needs of environmental monitoring applications. Energy efficiency is also addressed through energy-aware node selection and cluster formation optimized to minimize energy consumption.
Keywords: Environmental monitoring; IoT; clustering; DBSCAN; genetic algorithms
Disclosure statement: No potential conflict of interest was reported by the author(s).
Ethical approval: This article does not contain any studies with human participants performed by any of the authors.
Data availability statement: Data sharing does not apply to this article as no new data was created or analyzed in this study.
Notes on contributors: Mr. S. Regilan is a Research Scholar in the Department of Electronics and Communication Engineering with a track record of successful teaching and education reform. He completed his B.E. in Electronics and Communication Engineering at Bharath Niketan Engineering College, Anna University, in 2011, and his M.E. in Electronics and Communication Engineering at Aarupadai Veedu Institute of Technology, Vinayaka Missions Research Foundation, Chennai, in 2015, and is pursuing a Ph.D. in the same department at Aarupadai Veedu Institute of Technology. He has worked at various recognized institutions since 2011 and has over 10 years of academic experience in Electronics and Communication Engineering. He is a member of professional bodies such as ISTE and IEEE, has participated in and presented at many international and national conferences, workshops, seminars, and webinars in the field, and has published five well-cited Scopus-indexed papers in reputed journals.
{"title":"Optimizing environmental monitoring in IoT: integrating DBSCAN with genetic algorithms for enhanced clustering","authors":"S. Regilan, L.K. Hema","doi":"10.1080/1206212x.2023.2277966","DOIUrl":"https://doi.org/10.1080/1206212x.2023.2277966","url":null,"abstract":"AbstractIn our study, we introduce an advanced clustering method designed for IoT-based environmental monitoring. We’ve combined two powerful techniques, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Genetic Algorithms (GA), to create a specialized approach called EC-GAD (Enhanced-Clustering using Genetic Algorithms and DBSCAN). This integrated system model relies on DBSCAN, a robust clustering algorithm capable of handling irregular shapes and varying data densities, to group sensor nodes based on their physical proximity. To improve clustering performance, we’ve harnessed Genetic Algorithms to optimize the parameters of DBSCAN. Through a repetitive process involving selection, crossover, and mutation, GA refines parameter settings based on the quality of environmental clustering as assessed by fitness metrics. Our approach is tailored specifically for IoT deployments in environmental monitoring, which involve collecting data from sensor nodes and integrating DBSCAN and GA. We’ve paid special attention to choosing an appropriate distance metric and fine-tuning DBSCAN parameters such as epsilon (ε) and minPts to match the unique needs of environmental monitoring applications. Furthermore, we’ve taken energy efficiency into account by implementing energy-aware node selection and optimizing cluster formation to minimize energy consumption.KEYWORDS: Environmental monitoringIoTclusteringDBSCANgenetic algorithms Disclosure statementNo potential conflict of interest was reported by the author(s).Ethical approvalThis article does not contain any studies with human participants performed by any of the authors.Data availability statementData sharing does not apply to this article as no new data has been created or analyzed in this study.Additional informationNotes on contributorsS. RegilanMr. S. Regilan working as a Research Scholar in the Department of Electronics and Communication Engineering. He has a track record of successful teaching, education reform and has been teaching Students for decades. He Completed his B.E in Electronics and Communication Engineering Department, in Bharath Niketan Engineering College, Anna University on 2011; M.E in Electronics and Communication Engineering Department, in Aarupadai Veedu Institute of Technology, Vinayaka Missions Research Foundation, Chennai on 2015. Pursuing Ph.D in Department of Electronics and Communication Engineering, Aarupadai Veedu Institute of Technology, Vinayaka Missions Research Foundation, Chennai. He worked various recognized Institutions from 2011. He had 10+ years of academic experiences in the field of Electronics and Communication Engineering. He is member in various professional bodies like ISTE, IEEE societies. He participated and Presented many International & National Conferences/Workshop/Seminar/ Webinar in the field of Electronics and Communication Engineering. He published and indexed 5 papers in reputed journals under Scopus with good citations indexed. 
Ma","PeriodicalId":39673,"journal":{"name":"International Journal of Computers and Applications","volume":"75 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135092732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-08 | DOI: 10.1080/1206212x.2023.2279811
Sembati Yassine, Naja Najib, Jamali Abdellah
Title: Routing approaches in named data network: a survey and emerging research challenges
Abstract: Named Data Networking (NDN) has emerged as a promising information-centric networking paradigm that addresses the limitations of the traditional IP-based Internet architecture. The core principle of NDN is content naming instead of host addressing, providing efficient, secure, and scalable content delivery. Routing is a critical component of NDN, responsible for discovering and maintaining optimal paths to named content. This paper presents a comprehensive review of routing techniques in NDN, focusing on design principles, algorithms, and performance metrics, especially in wired network architectures. We first summarize the NDN architecture and discuss its key components. We then delve into the fundamental routing concepts in NDN and categorize and examine various routing techniques, including link-state, distance-vector, and centralized approaches based on Software-Defined Networking. We also summarize the research efforts proposed to address NDN routing challenges, with a focus on wired network architectures. Finally, we identify open research issues and future directions in NDN routing, emphasizing the need for scalable, efficient, and secure routing techniques that can fulfill the growing demands of the modern Internet. In conclusion, this review serves as a valuable reference for researchers and practitioners in NDN, offering a comprehensive understanding of current state-of-the-art routing techniques, their limitations, and potential future advancements.
Keywords: Software defined network; routing; named data network; scalability; overhead; wired network
Acknowledgements: My profound gratitude goes out to our mentors, Pr. Jamali Abdellah and Naja Najib, for their essential advice and assistance during the study process. I would especially like to thank my parents for their insightful advice. I appreciate the unwavering support and love of my family and friends. Finally, I would like to express my gratitude for the assistance and cooperation of the entire Department of Mathematics, Informatique, and Networks team.
Disclosure statement: No potential conflict of interest was reported by the author(s).
{"title":"Routing approaches in named data network: a survey and emerging research challenges","authors":"Sembati Yassine, Naja Najib, Jamali Abdellah","doi":"10.1080/1206212x.2023.2279811","DOIUrl":"https://doi.org/10.1080/1206212x.2023.2279811","url":null,"abstract":"AbstractNamed Data Networking (NDN) has emerged as a promising information-centric networking paradigm that addresses the limitations of the traditional IP-based Internet architecture. The core principle of NDN relies on content naming instead of host addressing, to provide efficient, secure, and scalable content delivery. Routing is a critical component of NDN and is responsible for discovering and maintaining optimal paths to named content. This paper presents a comprehensive review of routing techniques in NDN, focusing on the design principles, algorithms, and performance metrics, especially in wired network architecture. We first summarize the NDN architecture and discuss its key components. We then delve into the fundamental routing concepts in NDN and categorize and examine various routing techniques, including link state, distance vector, and centralized approaches based on Software Defined Network. We also summarize the relevant research efforts proposed to address NDN routing challenges by focusing more on wired network architecture. Finally, we identify open research issues and future directions in NDN routing, emphasizing the need for scalable, efficient, and secure routing techniques that can fulfill the growing demands of the modern Internet. In conclusion, this review serves as a valuable reference for researchers and practitioners in NDN, offering a comprehensive understanding of the current state-of-the-art routing techniques, limitations, and potential future advancements.KEYWORDS: Software defined networkroutingnamed data networkscalabilittyoverheadwired network AcknowledgementsMy profound gratitude goes out to our mentors, Pr. Jamali Abdellah and Naja Najib, for their essential advice and assistance during the study process. I would especially like to thank my parents for their insightful advice. I appreciate the unwavering support and love of my family and friends. Finally, I would like to express my gratitude for the assistance and cooperation of the entire Department of Mathematics, Informatique, and Networks team.Disclosure statementNo potential conflict of interest was reported by the author(s).","PeriodicalId":39673,"journal":{"name":"International Journal of Computers and Applications","volume":"27 31","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135391922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-03 | DOI: 10.1080/1206212x.2023.2277554
Laila Bouhouch, Mostapha Zbakh, Claude Tadonki
Title: DFMCloudsim: an extension of cloudsim for modeling and simulation of data fragments migration over distributed data centers
Abstract: Due to the increasing volume of data for applications running on geographically distributed Cloud systems, efficient data management has emerged as a crucial performance factor. Alongside basic task scheduling, the management of input data on distributed Cloud systems has become a genuine challenge, particularly for data-intensive applications. Ideally, each dataset should be stored in the same data center as its consumer tasks, so that all data accesses are local. However, when a given task does not need all items within one of its input datasets, sending that dataset in its entirety can incur a severe time overhead. To address this concern, a data fragmentation strategy can be used to partition the datasets and process them in that form. Such a strategy should be flexible enough to support any user-defined partitioning, and efficient enough to minimize the overhead of transferring the data in fragmented form. To simulate and estimate the basic statistics of both the fragmentation and migration mechanisms prior to an implementation in a real Cloud, we chose Cloudsim, a popular simulator for Cloud Computing investigations, with the goal of enhancing it with the corresponding extensions. Our proposed extension, named DFMCloudsim, provides an efficient module for implementing fragmentation and data-migration strategies. We validate the extension using various simulated scenarios. The results indicate that it effectively achieves its main objectives and can reduce data transfer overhead by 74.75% compared to our previous work.
Keywords: Cloud computing; big data; cloudsim; data fragmentation; data migration
Acknowledgments: L. B.: prepared the manuscript and performed analysis and experiments. M. Z., C. T.: helped in the initial solution design. All authors reviewed the paper and approved the final version of the manuscript.
Availability of data and materials: All of the material is owned by the authors and can be accessed by email request.
Disclosure statement: No potential conflict of interest was reported by the author(s).
Notes on contributors: Laila Bouhouch received her engineering degree in Computer Science from ENSA (National School of Applied Sciences), Ibn Zohr University, Agadir, Morocco, in 2017. She is currently a Ph.D. student in the Department of Computer Science, Laboratory CEDOC ST2I, ENSIAS, Rabat, Morocco. Her research interests include big data management in workflow systems, cloud computing, and distributed systems. Mostapha Zbakh received his Ph.D. in computer science from the Polytechnic Faculty of Mons, Belgium, in 2001, and has been a Professor at ENSIAS (National School of Computer Science and System Analysis), Mohammed V University, Rabat, Morocco, since 2002. His research interests include load balancing, parallel and distributed systems, HPC, big data, and cloud computing. Claude Tadonki currently holds …
{"title":"DFMCloudsim: an extension of cloudsim for modeling and simulation of data fragments migration over distributed data centers","authors":"Laila Bouhouch, Mostapha Zbakh, Claude Tadonki","doi":"10.1080/1206212x.2023.2277554","DOIUrl":"https://doi.org/10.1080/1206212x.2023.2277554","url":null,"abstract":"AbstractDue to the increasing volume of data for applications running on geographically distributed Cloud systems, the need for efficient data management has emerged as a crucial performance factor. Alongside basic task scheduling, the management of input data on distributed Cloud systems has become a genuine challenge, particularly with data-intensive applications. Ideally, each dataset should be stored in the same data center as its consumer tasks so as to lead to local data accesses only. However, when a given task does not need all items within one of its input datasets, sending that dataset entirely might lead to a severe time overhead. To address this concern, a data fragmentation strategy can be considered in order to partition the datasets and process them in that form. Such a strategy should be flexible enough to support any user-defined partitioning, and suitable enough to minimize the overhead of transferring the data in their fragmented form. To simulate and estimate the basic statistics of both fragmentation and migration mechanisms prior to an implementation in a real Cloud, we chose Cloudsim, with the goal of enhancing it with the corresponding extensions. Cloudsim is a popular simulator for Cloud Computing investigations. Our proposed extension is named DFMCloudsim, its goal is to provide an efficient module for implementing fragmentation and data migration strategies. We validate our extension using various simulated scenarios. The results indicate that our extension effectively achieves its main objectives and can reduce data transfer overhead by 74.75% compared to our previous work.Keywords: Cloud computingbig datacloudsimdata fragmentationdata migration AcknowledgmentsL. B.: prepared the manuscript, and performed analysis and experiments. M. Z., C. T.: helped in the initial solution design. All authors reviewed the paper and approved the final version of the manuscript.Availability of data and materialsAll of the material is owned by the authors and can be accessed by email request.Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationNotes on contributorsLaila BouhouchLaila Bouhouch received her engineer degree in Computer Science at ENSA (National School of Applied Sciences) at Ibn Zohr University, Agadir, Morocco, in 2017. She is currently a Ph.D. student in the Department of Computer Science, Laboratory CEDOC ST2I, ENSIAS, Rabat, Morocco. Her research interests include big data management in workflow systems, cloud computing and distributed systems.Mostapha ZbakhMostapha Zbakh received his Ph.D. in computer sciences from Polytechnic Faculty of Mons, Belgium, in 2001. He is currently a Professor at ENSIAS (National School of Computer Science and System Analysis) at Mohammed V University, Rabat, Morocco, since 2002. 
His research interests include load balancing, parallel and distributed systems, HPC, Big data and Cloud computing.Claude TadonkiClaude Tadonki currently holds ","PeriodicalId":39673,"journal":{"name":"International Journal of Computers and Applications","volume":"13 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135818487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
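The payoff of fragmentation is easy to estimate: when a task consumes only a few fragments of a remote dataset, migrating just those fragments cuts transfer time in proportion to the bytes avoided. A back-of-the-envelope sketch with invented sizes and bandwidth (not the paper's experimental setup):

```python
def transfer_time(size_gb: float, bandwidth_gbps: float) -> float:
    """Seconds to move size_gb over a link of bandwidth_gbps."""
    return size_gb * 8 / bandwidth_gbps

dataset_gb = 50.0                  # hypothetical full dataset
fragments_gb = [6.0, 4.0, 5.0]     # only these fragments are consumed
bandwidth = 1.0                    # Gbit/s inter-datacenter link

whole = transfer_time(dataset_gb, bandwidth)
parts = sum(transfer_time(f, bandwidth) for f in fragments_gb)
print(f"whole dataset: {whole:.0f}s, needed fragments: {parts:.0f}s "
      f"({100 * (1 - parts / whole):.0f}% less transfer time)")
```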
Pub Date: 2023-11-02 | DOI: 10.1080/1206212x.2023.2270804
Santosh Kumar Majhi, Abhipsa Panda, Suresh Kumar Srichandan, Usha Desai, Biswaranjan Acharya
Title: Malware image classification: comparative analysis of a fine-tuned CNN and pre-trained models
Abstract: Malware detection and classification play a crucial part in ensuring the safety and security of computer systems. In this work, a comprehensive study is presented on the classification of malware images using a fine-tuned Convolutional Neural Network (CNN), whose performance is compared with five pre-trained models: ResNet50, InceptionResNetV2, VGG16, Xception, and InceptionV3. The suggested CNN framework was trained on the MalImg_9010 dataset, consisting of 9,376 grayscale images resized to 128 × 128 pixels. The models were evaluated on F1 score, recall, precision, and accuracy. The experiments demonstrate that the fine-tuned CNN model achieves an impressive F1 score of 0.965 and an accuracy of 95.57%. Furthermore, the comparison with pre-trained models confirms the advantage of the presented framework in both F1 score and accuracy. The simulation output suggests that the fine-tuned CNN approach is promising for accurate malware image classification. Additionally, the paper discusses potential improvements, such as increasing the number of training epochs and incorporating larger and more diverse malware datasets, including RGB images and a broader range of malware families. This article gives valuable observations on the effectiveness of various models for classifying malware images and highlights future research directions in this domain.
Keywords: Malware image classification; data privacy; data protection; artificial intelligence; deep learning
Disclosure statement: The authors declare that they have no known competing financial or personal relationships that could be viewed as influencing the work reported in this paper. On behalf of all authors, the corresponding author states that there is no conflict of interest.
{"title":"Malware image classification: comparative analysis of a fine-tuned CNN and pre-trained models","authors":"Santosh Kumar Majhi, Abhipsa Panda, Suresh Kumar Srichandan, Usha Desai, Biswaranjan Acharya","doi":"10.1080/1206212x.2023.2270804","DOIUrl":"https://doi.org/10.1080/1206212x.2023.2270804","url":null,"abstract":"AbstractA crucial part is played by malware detection and classification in ensuring the safety and security of computer systems. In this work, a comprehensive study has been presented for the classification of harmful or malware images that uses a Convolutional Neural Network (CNN) which has been finely tuned and its performance has been compared with five pre-trained models: ResNet50, InceptionResNetV2, VGG16, Xception and InceptionV3. The suggested CNN framework has been trained using the dataset MalImg_9010, consisting of 9,376 grayscale images resized to 128 × 128 pixels. The models have been evaluated based on their F1 score, recall, precision, and accuracy. The experiments that were conducted demonstrate that the fine-tuned CNN model achieves an impressive 0.965 as the F1 score and a 95.57% accuracy. Furthermore, the comparison with pre-trained models reveals the dominance of the presented framework concerning the F1 score and accuracy. The output of the conducted simulation suggests that the fine-tuned CNN approach shows promise for accurate malware image classification. Additionally, the paper discusses potential improvements, such as increasing the number of training epochs and incorporating larger and more diverse malware datasets, including RGB images and a broader range of malware families. The current research article gives valuable observations on various models’ effectiveness for classifying malware images and highlights the future scopes for research incorporating this domain.KEYWORDS: Malware image classificationdata privacydata protectionartificial intelligencedeep learning Disclosure statementThe authors declare that they have no known competing financial or personal relationships that could be viewed as influencing the work reported in this paper. On behalf of all authors, the corresponding author states that there is no conflict of interest.","PeriodicalId":39673,"journal":{"name":"International Journal of Computers and Applications","volume":"78 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135976060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01 | DOI: 10.1080/1206212x.2023.2275084
Wathiq Laftah Al-Yaseen, Ali Kadhum Idrees
Title: MuDeLA: multi-level deep learning approach for intrusion detection systems
Abstract: In recent years, deep learning techniques have achieved significant results in several fields, such as computer vision, speech recognition, bioinformatics, medical image analysis, and natural language processing. Deep learning has also been widely used for intrusion detection, particularly implementations of convolutional neural networks (CNN), multilayer perceptrons (MLP), and autoencoders (AE) that classify traffic as normal or abnormal. In this article, we propose a multi-level deep learning approach (MuDeLA) for intrusion detection systems (IDS). MuDeLA is based on a CNN and an MLP to enhance attack-detection performance in the IDS. It is evaluated on the well-known benchmark datasets KDDCup'99, NSL-KDD, and UNSW-NB15 in order to broaden the comparison with related work. The outcomes show that the proposed MuDeLA achieves high efficiency for multiclass classification compared with the other methods, with accuracy reaching 95.55% for KDDCup'99, 88.12% for NSL-KDD, and 90.52% for UNSW-NB15.
Keywords: Intrusion detection system; multilevel learning model; deep learning; convolutional neural network; multilayer perceptron
Disclosure statement: No potential conflict of interest was reported by the author(s).
Notes on contributors: Wathiq Laftah Al-Yaseen is currently a Lecturer in the Department of Computer Systems Techniques at Kerbala Technical Institute, Al-Furat Al-Awsat Technical University, Kerbala, Iraq. He received his Master of Computer Science from the University of Babylon, Iraq, and his PhD in Computer Science from FTSM/UKM, Malaysia. His research interests include artificial intelligence, network security, machine learning, data mining, and bioinformatics. Ali Kadhum Idrees received his BSc and MSc in Computer Science from the University of Babylon, Iraq, in 2000 and 2003 respectively, and his PhD in Computer Science (wireless networks) in 2015 from the University of Franche-Comte (UFC), France. He is currently an Assistant Professor in Computer Science at the University of Babylon, Iraq. He has several research papers in wireless sensor networks (WSNs) and computer networks, and his research interests include wireless networks, WSNs, SDN, IoT, distributed computing, data mining and optimisation in communication networks.
{"title":"MuDeLA: multi-level deep learning approach for intrusion detection systems","authors":"Wathiq Laftah Al-Yaseen, Ali Kadhum Idrees","doi":"10.1080/1206212x.2023.2275084","DOIUrl":"https://doi.org/10.1080/1206212x.2023.2275084","url":null,"abstract":"AbstractIn recent years, deep learning techniques have achieved significant results in several fields, like computer vision, speech recognition, bioinformatics, medical image analysis, and natural language processing. On the other hand, deep learning for intrusion detection has been widely used, particularly the implementation of convolutional neural networks (CNN), multilayer perceptron (MLP), and autoencoders (AE) to classify normal and abnormal. In this article, we propose a multi-level deep learning approach (MuDeLA) for intrusion detection systems (IDS). The MuDeLA is based on CNN and MLP to enhance the performance of detecting attacks in the IDS. The MuDeLA is evaluated by using various well-known benchmark datasets like KDDCup'99, NSL-KDD, and UNSW-NB15 in order to expand the comparison with different related work results. The outcomes show that the proposed MuDeLA achieves high efficiency for multiclass classification compared with the other methods, where the accuracy reaches 95.55% for KDDCup'99, 88.12% for NSL-KDD, and 90.52% for UNSW-NB15.Keywords: Intrusion detection systemmultilevel learning modeldeep learningconvolution neural networkmultilayer perceptron Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationNotes on contributorsWathiq Laftah Al-YaseenWathiq Laftah Al-Yaseen is currently a Lecturer in the Department of Computer Systems Techniques at Kerbala Technical Institute in Al-Furat Al-Awsat Technical University, Kerbala, Iraq. He received his Master of Computer Science from the University of Babylon, Iraq. He received his PhD of Computer Science from FTSM/UKM, Malaysia. His research interests include artificial intelligence, network security, machine learning, data mining and bioinformatics.Ali Kadhum IdreesAli Kadhum Idrees received his BSc and MSc in Computer Science from the University of Babylon, Iraq in 2000 and 2003 respectively. He received his PhD in Computer Science (wireless networks) in 2015 from the University of Franche-Comte (UFC), France. He is currently an Assistant Professor in Computer Science at the University of Babylon, Iraq. He has several research papers in wireless sensor networks (WSNs) and computer networks. His research interests include wireless networks, WSNs, SDN, IoT, distributed computing, data mining and optimisation in communication networks.","PeriodicalId":39673,"journal":{"name":"International Journal of Computers and Applications","volume":"116 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135325829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-31 | DOI: 10.1080/1206212x.2023.2276553
Muhammad Sam'an, Muhammad Munsarif, Safuan, Yahya Nur Ifriza
{"title":"Feature selection in P2P lending based on hybrid genetic algorithm with machine learning","authors":"Muhammad Sam'an, Muhammad Munsarif, None Safuan, Yahya Nur Ifriza","doi":"10.1080/1206212x.2023.2276553","DOIUrl":"https://doi.org/10.1080/1206212x.2023.2276553","url":null,"abstract":"","PeriodicalId":39673,"journal":{"name":"International Journal of Computers and Applications","volume":"14 13","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135871582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-30 | DOI: 10.1080/1206212x.2023.2273585
Neelam Rawat, Vikas Somani, Arun Kr. Tripathi
Title: Prioritizing software regression testing using reinforcement learning and hidden Markov model
Abstract: Software regression testing is an essential practice that ensures changes made to the source code of an application do not affect its functionality and quality. In this research, we introduce a novel method for prioritizing software test cases using a fusion of reinforcement learning and a hidden Markov model to enhance the efficiency of the testing process. The primary objective is to maximize the likelihood of selecting the test cases most likely to uncover defects in new code changes introduced into the codebase. To assess the efficacy of the suggested methodology, we experimented on the test cases of five web applications. The results demonstrate that the proposed approach can accurately identify critical test cases while minimizing false positives, as evidenced by an F1 score of 0.849. This outcome can help prioritize testing efforts, saving time and resources while improving the overall efficiency of the testing process.
Keywords: Regression testing; test case prioritization (TCP); hidden Markov model (HMM); reinforcement learning (RL)
Disclosure statement: No potential conflict of interest was reported by the author(s).
Notes on contributors: Ms. Neelam Rawat is a dedicated research scholar in Computer Science & Engineering at Sangam University. With an extensive portfolio that includes over 15 publications, 3 patents, and 2 authored books, she is actively engaged in pioneering research. Her primary areas of expertise are machine learning, deep learning, software testing, software engineering, quality assurance, and management. Dr. Vikas Somani (PhD, M.Tech, MCA, BCA) has more than 16 years of teaching and industrial experience. He is currently Associate Professor and Assistant Dean, School of Engineering and Technology, Sangam University, Bhilwara. His research interests span cloud computing, artificial intelligence, machine learning, blockchain, and the Internet of Things (IoT). He is a member of IEEE, CSI, IAENG, ACM, and IRED, has published over 35 research papers in international and national journals and conferences, attended around 50 workshops and short-term programs, and has supervised or guided more than 20 research works; six research scholars are currently working under him. He has been awarded three patents, granted by the Government of India Patent Office and the Germany Patent Office, and has published five further patents. Dr. Arun Kr. Tripathi has more than 21 years of teaching experience and completed a Ph.D. in Computer Applications with a specialization in wireless networks. He is presently Head of Computer Applications, with additional responsibility as Head of the Cyber Security and Forensic Science Division. His major research interests are computer networks, network security, IoT, and machine learning, with over 70 published works in reputed journals and conferences. He has reviewed more than 35 SCI-indexed journal articles.
{"title":"Prioritizing software regression testing using reinforcement learning and hidden Markov model","authors":"Neelam Rawat, Vikas Somani, Arun Kr. Tripathi","doi":"10.1080/1206212x.2023.2273585","DOIUrl":"https://doi.org/10.1080/1206212x.2023.2273585","url":null,"abstract":"AbstractSoftware regression testing is an essential testing practice that ensures that changes made to the source code of an application do not affect its functionality and quality. Within this research, we introduce a novel method for prioritizing software test cases using a fusion of reinforcement learning and hidden Markov model to enhance the efficiency of the testing process. The primary objective of this research paper is to maximize the likelihood of selecting test cases that have the highest priority of uncovering defects in new code changes introduced into the codebase. To assess the efficacy of our suggested methodology, we experimented on the test cases of five web applications. Our results demonstrate that our proposed approach can accurately identify critical test cases while minimizing false positives, as evidenced by an F1 score of 0.849. This outcome can help prioritize testing efforts, saving time, and resources while improving the overall efficiency of the testing process.Keywords: Regression testingtest case prioritization (TCP)hidden Markov model (HMM)reinforcement learning (RL) Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationNotes on contributorsNeelam RawatMs. Neelam Rawat is a dedicated research scholar in the field of Computer Science & Engineering at Sangam University. With an extensive portfolio that includes over 15 publications, 3 patents, and 2 authored books, she is actively engaged in pioneering research. Her primary areas of expertise lie in the domains of machine learning, deep learning, software testing, software engineering, quality assurance, and management.Vikas SomaniDr. Vikas Somani (PhD, M.Tech, MCA,BCA) has more than 16 years of Teaching and Industrial Experience. Currently he is Associate Professor and Assistant Dean, School of Engineering and Technology at the Sangam University, Bhilwara. He has diversified research interests in the areas of Cloud Computing, Artificial Intelligence, Machine Learning, Block chain and Internet of Things (IoT). He is a Member of IEEE, CSI, IAENG, ACM, IRED. He has published over 35 Research Paper in International, National Journal and Conferences and attended around 50 Workshops and STP. He has also Supervised/Guided more than 20 Research Work. Currently, under his 6 research scholars are working. He has Three Patent awarded and granted/design one from Government of India Patent Office and another from Germany Patent Office. He has also published Five Patents.Arun Kr. TripathiDr. Arun Kr. Tripathi has more than 21 years of Teaching experience and completed Ph.D. in Computer Applications with specialization in Wireless Networks. Presently he is appointed as Head of Computer Applications with and an additional responsibility of Head Cyber Security and Forensic Science Division. His major research interests are Computer Network, Network Security, IoT, Machine Learning etc. 
with over 70 published works in reputed Journal","PeriodicalId":39673,"journal":{"name":"International Journal of Computers and Applications","volume":"296 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136067877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
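To make the reinforcement-learning side concrete, the sketch below keeps a value estimate per test case, updates it from pass/fail rewards, and re-ranks the suite each cycle with ε-greedy exploration. The reward scheme and learning rate are assumptions for illustration, and the paper's hidden-Markov-model component over execution history is omitted here.

```python
import random

class TestPrioritizer:
    def __init__(self, test_ids, alpha=0.3, eps=0.1):
        self.values = {t: 0.5 for t in test_ids}  # prior defect-finding value
        self.alpha, self.eps = alpha, eps
        self.rng = random.Random(0)

    def prioritize(self):
        """Rank tests by value; occasionally explore a random order."""
        order = sorted(self.values, key=self.values.get, reverse=True)
        if self.rng.random() < self.eps:
            self.rng.shuffle(order)
        return order

    def update(self, test_id, failed: bool):
        """Value update: a failing test (defect found) earns reward 1."""
        reward = 1.0 if failed else 0.0
        v = self.values[test_id]
        self.values[test_id] = v + self.alpha * (reward - v)

tp = TestPrioritizer([f"tc{i}" for i in range(5)])
for cycle in range(3):
    for t in tp.prioritize():
        tp.update(t, failed=(t == "tc3"))          # pretend tc3 keeps exposing a defect
print(tp.prioritize())                             # tc3 now ranks first
```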
Pub Date: 2023-10-26 | DOI: 10.1080/1206212x.2023.2273011
Amina Bouhadja, Abdelkrim Bouramoul
{"title":"Beyond cocaine and heroin use: a stacking ensemble-based framework for predicting the likelihood of subsequent substance use disorder using demographics and personality traits","authors":"Amina Bouhadja, Abdelkrim Bouramoul","doi":"10.1080/1206212x.2023.2273011","DOIUrl":"https://doi.org/10.1080/1206212x.2023.2273011","url":null,"abstract":"","PeriodicalId":39673,"journal":{"name":"International Journal of Computers and Applications","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134907507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-26 | DOI: 10.1080/1206212x.2023.2273015
Chigurupati Ravi Swaroop, K. Raja
{"title":"AT-densenet with salp swarm optimization for outlier prediction","authors":"Chigurupati Ravi Swaroop, K. Raja","doi":"10.1080/1206212x.2023.2273015","DOIUrl":"https://doi.org/10.1080/1206212x.2023.2273015","url":null,"abstract":"","PeriodicalId":39673,"journal":{"name":"International Journal of Computers and Applications","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134907681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-18 | DOI: 10.1080/1206212x.2023.2267839
Megha Gupta, Laxmi Ahuja, Ashish Seth
Title: Security enhancement in a cloud environment using a hybrid chaotic algorithm with multifactor verification for user authentication
Abstract: A hybrid chaotic-based DNA and multifactor authentication strategy is created to improve the protection of the cloud environment. Initially, multimodal data are collected from the data owner, and the information is compressed using the deflate compression approach. The data are then encrypted using hybrid chaotic-based DNA cryptography to increase data security; in this hybrid algorithm, DNA is used for key generation and a chaotic algorithm is used for the encryption process. In addition, a multifactor authentication method controls access to data in the cloud and blocks unauthorized users. In that technique, users are requested to enter their registered password together with an OTP generated from their mobile number. The device serial number is another factor used to verify the accessing device, and the user's fingerprint and iris are likewise validated before data access. Cloud-based data become accessible only after successful user authentication. The simulation analysis shows that the encryption and decryption times reached for image, string, and integer data are 24, 0.065, 37, 0.14, 28, and 0.14 s, respectively, for the cloud security algorithm. The proposed algorithm effectively mitigates space consumption and provides improved data security in a cloud environment.
Keywords: Cloud environment; security; compression; deflate; hybrid chaotic-DNA; multifactor authentication
Acknowledgements: The authors express their deep sense of gratitude to the Founder President of Amity University, Dr. Ashok K. Chauhan, for his keen interest in promoting research at Amity University; he has always been an inspiration for achieving great heights.
Disclosure statement: No potential conflict of interest was reported by the author(s).
Compliance with ethical standards: This article is a completely original work of its authors; it has not been published before and will not be sent to other publications until the journal's editorial board decides not to accept it for publication.
Funding: The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
Notes on contributors: Megha Gupta is a research scholar pursuing a Ph.D. at AIIT, Amity University, Noida, under the guidance of Prof. (Dr.) Laxmi Ahuja and co-guide Prof. (Dr.) Ashish Seth. She is a gold medalist in M.Tech from Jamia Hamdard, New Delhi, and received her B.Tech degree with honours from B.M.I.E.T Sonipat. Her research areas include cloud computing, security, software engineering, software testing, computer networks, database management systems, and big data. She has published several research papers in reputed international and national journals and has participated in and presented research papers at various international and national conferences. Prof. (Dr.) Laxmi Ahuja, Ph.D. (CSE), works as a Professor at the Amity Institute of Information Technology, where over 21 years her roles have ranged from Lecturer to Professor to Head of Department. Her research areas include search engines, data mining, and soft-computing methods, and she has published more than 100 research papers in international and national conferences and in Scopus-indexed journals with publishers such as Springer, Elsevier, and Inderscience. As an inventor she has successfully filed several patents in information technology, published in the journal of the Government of India's patent department. She has published several Springer book chapters and lecture notes, organized various international conferences and served as a session chair, frequently delivers guest lectures, workshops, and faculty development programmes, and is an Associate Editor of the proceedings of the International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions). She is actively engaged in research, currently guiding eight research scholars and three doctoral students, and has published several books on operating systems and information technology. She is a member of IEEE, a life member of the Computer Society of India, a past Vice-Chair of IETE (The Institution of Electronics and Telecommunication Engineers), a senior member of IACSIT, and a member of the IEEE Society, participating actively in these societies' activities; she has served on the technical chairs of many conferences and been honoured as a guest at many events. Dr. Ashish Seth, SMIEEE and ACM Distinguished Speaker, is currently a Professor in the School of Global Convergence Studies, Inha University, South Korea, posted at Inha University in Tashkent, Uzbekistan, since September 2016.
{"title":"Security enhancement in a cloud environment using a hybrid chaotic algorithm with multifactor verification for user authentication","authors":"Megha Gupta, Laxmi Ahuja, Ashish Seth","doi":"10.1080/1206212x.2023.2267839","DOIUrl":"https://doi.org/10.1080/1206212x.2023.2267839","url":null,"abstract":"AbstractA hybrid chaotic-based DNA and multifactor authentication strategy are created to improve the protection of the cloud environment. Initially, multimodal data are collected from the data owner then the information is compressed by utilizing the deflate compression approach. The data is then encrypted using hybrid chaotic-based DNA cryptography to increase the security of data. In this hybrid algorithm DNA is used for the key generation and chaotic algorithm is utilized for the encryption process. On the other hand, a multifactor authentication method is created to access data from the cloud to block access by unauthorized users. In that technique, users are requested to enter the registered Password with the generated OTP from the mobile number. Then, the device serial number is another factor to verify the accessing device. Likewise, the user's fingerprint and iris recognition are also validated for accessing the data. The cloud-based data can be accessible following users' successful authentication. The simulation analysis shows that the encryption and decryption time reached for image, string and integer data is 24, 0.065, 37, 0.14, 28 and 0.14 s, respectively, for the cloud security algorithm. The proposed algorithm effectively mitigates space consumption and provides improved data security in a cloud environment.KEYWORDS: Cloud environmentsecuritycompressiondeflatehybrid chaotic-DNAmultifactor authentication AcknowledgementsAuthor express their deep sense of gratitude to the Founder President of Amity University, Dr. Ashok K. Chauhan for his keen interest in promoting research in the Amity University and have always been an inspiration for achieving great heights.Disclosure statementNo potential conflict of interest was reported by the author(s).Compliance with ethical standardsThis article is a completely original work of its authors; it has not been published before and will not be sent to other publications until the journal’s editorial board decides not to accept it for publication.Additional informationFundingThe authors declare that no funds, grants, or other support were received during the preparation of this manuscript.Notes on contributorsMegha GuptaMegha Gupta is research scholar pursuing Ph.D from AIIT, Amity University, and Noida under guidance of Prof. (Dr.) Laxmi Ahuja & Co-Guide Prof. (Dr.) Ashish Seth. She is Gold Medalist in M.tech from Jamia Hamdard, New Delhi. She received her B.TECH degree with HONOURS from B.M.I.E.T Sonipat. Her research areas include Cloud Computing, Security, Software Engineering, Software Testing, Computer Networks, Database Management systems and Big Data. She has published several research papers in reputed international and national journals and also participated and presented research papers in various international, national conferences.Laxmi AhujaProf. (Dr.) 
Laxmi Ahuja Ph.D(CSE) working as Professor in Amity Institute of Information Technology with the role ranging from Lecturer to Prof","PeriodicalId":39673,"journal":{"name":"International Journal of Computers and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135882961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
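The data path the abstract describes (deflate compression, then encryption under a chaotic keystream) can be illustrated in a few lines. Here a logistic map stands in for the hybrid chaotic-DNA generator, and the fixed seed and map parameter are placeholders for the DNA-derived key, so this is a sketch of the pipeline's shape rather than a secure or faithful implementation.

```python
import zlib

def logistic_keystream(seed: float, r: float, n: int) -> bytes:
    """Derive n keystream bytes from logistic-map iterations x <- r*x*(1-x)."""
    x, out = seed, bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantize chaotic state to a byte
    return bytes(out)

def encrypt(data: bytes, seed: float = 0.613, r: float = 3.99) -> bytes:
    compressed = zlib.compress(data)     # deflate compression step
    ks = logistic_keystream(seed, r, len(compressed))
    return bytes(c ^ k for c, k in zip(compressed, ks))

def decrypt(blob: bytes, seed: float = 0.613, r: float = 3.99) -> bytes:
    ks = logistic_keystream(seed, r, len(blob))
    return zlib.decompress(bytes(c ^ k for c, k in zip(blob, ks)))

msg = b"sensor reading: temp=23.4C humidity=61%"
assert decrypt(encrypt(msg)) == msg      # XOR keystream is symmetric
```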