
International journal of machine learning and computing: Latest Publications

Deep Learning Based Hybrid Network Architecture to Diagnose IoT Sensor Signal in Healthcare System
Pub Date : 2023-04-05 DOI: 10.53759/7669/jmc202303011
S. S., M. S. Koti
IoT is a fascinating technology in today's IT world, in which items may transmit data and interact through intranet or internet networks. The Internet of Things (IoT) has shown a lot of promise in connecting various medical equipment, sensors, and healthcare specialists to provide high-quality medical services remotely. As a result, patient safety has improved, healthcare expenses have fallen, healthcare service accessibility has increased, and operational efficiency in the healthcare industry has risen. Healthcare IoT signal analysis is now widely employed in clinics as a critical tool for diagnosing health issues. In the medical domain, automated identification and classification technologies help clinicians make more accurate and timely diagnoses. In this paper, we propose a deep-learning-based hybrid network architecture, CNN-R-LSTM (DCRL), that combines the characteristics of a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) based long short-term memory (LSTM) to diagnose IoT sensor signals and classify them into three categories: healthy, patient, and serious illness. The deep CNN-R-LSTM algorithm classifies the IoT healthcare data via a dedicated neural network model. For our study, we used the MIT-BIH dataset, the Pima Indians Diabetes dataset, the BP dataset, and the Cleveland Cardiology dataset. The experimental results revealed strong classification performance, with accuracy, specificity, and sensitivity of 99.02 percent, 99.47 percent, and 99.56 percent, respectively. Our proposed DCRL model operates on healthcare IoT centre inputs and may aid clinicians in effectively recognizing a patient's health condition.
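The following Python/Keras snippet is a minimal sketch of the kind of CNN-LSTM hybrid the abstract describes: a one-dimensional convolutional front end feeding an LSTM and a three-way softmax over the healthy / patient / serious-illness classes. The layer sizes, window length, and training call are illustrative assumptions, not the authors' published DCRL configuration.

# Minimal sketch (not the authors' exact architecture): a 1-D CNN front end
# feeding an LSTM, ending in a 3-way softmax (healthy / patient / serious illness).
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 256   # assumed number of samples per IoT sensor window
CHANNELS = 1   # assumed single-channel signal

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),   # local waveform features
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                        # temporal dependencies
    layers.Dense(32, activation="relu"),
    layers.Dense(3, activation="softmax"),                  # 3 target classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data just to show the call signature; real inputs would come from
# the MIT-BIH / Pima / BP / Cleveland preprocessing pipeline.
x = np.random.randn(8, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, 3, size=(8,))
model.fit(x, y, epochs=1, verbose=0)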
Citations: 0
Energy Efficient Clustering and Routing Using Hybrid Fuzzy with Modified Rider Optimization Algorithm in IoT - Enabled Wireless Body Area Network
Pub Date : 2023-04-05 DOI: 10.53759/7669/jmc202303016
D. A, Rangaraj J
Wireless sensor networks are widely used in various Internet of Things applications, including healthcare, underwater sensor networks, body area networks, and office environments. A Wireless Body Area Network (WBAN) simplifies medical department tasks and provides a solution that reduces the possibility of errors in the medical diagnostic process. The growing demand for real-time applications in such networks will stimulate significant research activity. Designing for such critical events while maintaining energy efficiency is difficult due to dynamic changes in network topology, strict power constraints, and limited computing power. Routing protocol design is therefore crucial to WBAN and significantly impacts the communication stack and network performance. High node mobility in WBAN results in rapid topology changes, affecting network scalability. Node clustering is one of several mechanisms used in WBANs to address this issue. We consider optimization factors such as distance, latency, and power consumption of IoT devices to achieve the desired cluster head (CH) selection. This paper proposes a high-level CH selection and routing approach using a hybrid fuzzy system with a modified Rider Optimization Algorithm (MROA). The work is implemented in MATLAB, and the simulations are carried out under a range of conditions. In terms of energy consumption and network lifetime, the proposed scheme outperforms state-of-the-art techniques such as Low Energy Adaptive Clustering Hierarchy (LEACH), the Energy Control Routing Algorithm (ECCRA), the Energy Efficient Routing Protocol (EERP), and the Simplified Energy Balancing Alternative Aware Routing Algorithm (SEAR).
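As a rough illustration of the multi-factor cluster-head (CH) selection the abstract describes, the sketch below scores candidate nodes on distance, residual energy, and latency and picks the best-scoring node. It is a plain weighted-fitness stand-in, not the authors' hybrid fuzzy system or modified Rider Optimization Algorithm; all node fields and weights are invented for the demo.

# Simplified illustration (not the authors' MROA): score candidate nodes on the
# factors the abstract lists (distance to sink, residual energy, latency) and
# pick the best-scoring node as cluster head (CH).
import math
import random

def fitness(node, sink=(0.0, 0.0), w_dist=0.4, w_energy=0.4, w_latency=0.2):
    """Higher is better: close to the sink, high residual energy, low latency."""
    dist = math.dist((node["x"], node["y"]), sink)
    return (w_energy * node["energy"]
            - w_dist * dist / 100.0
            - w_latency * node["latency"])

def select_cluster_head(nodes):
    return max(nodes, key=fitness)

# Hypothetical sensor nodes with random positions, residual energy and latency.
random.seed(1)
nodes = [{"id": i,
          "x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "energy": random.uniform(0.2, 1.0),   # residual energy (normalised)
          "latency": random.uniform(0.0, 0.5)}  # observed latency (s)
         for i in range(10)]

ch = select_cluster_head(nodes)
print("Selected CH:", ch["id"])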
Citations: 1
An Insight on Optimization Techniques for Uncertain and Reliable Routing in Wireless Body Area Networks
Pub Date : 2023-04-05 DOI: 10.53759/7669/jmc202303013
K. Sakthivel, Rajkumar Ganesan
In recent times, Wireless Body Area Networks (WBANs), a subset of Wireless Sensor Networks, have become a promising technology for the future healthcare realm, offering cutting-edge capabilities that can assist healthcare professionals such as doctors, nurses, and biomedical engineers. Machine learning and Internet of Things enabled medical big data is the future of the healthcare sector and medical-technology industries, leading to applications in other areas such as fitness tracking for commercial purposes, health monitoring that tracks sportspersons' day-to-day activities, and wearable devices for critical and emergency care. This comprehensive review article addresses the state of the art of Wireless Body Area Networks and their dependence on optimization techniques and meta-heuristic algorithms for finding an efficient routing path between a source node and a destination node; such techniques play an effective role in optimizing network parameters such as radio range, energy consumption, throughput, data aggregation, clustering, and routing. Designing energy-efficient routing for wireless body area networks is a challenging task due to uncertainty in the dynamic network topology, energy constraints, and limited power budgets. Optimization techniques can help researchers address these drawbacks and improve the energy efficiency of the network. In this article, we focus mainly on how efficiently optimization algorithms are used in Wireless Body Area Network routing mechanisms and summarize earlier studies from the 2012-2023 period. Genetic Algorithm, Particle Swarm Optimization, Ant Colony Optimization, Artificial Bee Colony, and Firefly Optimization algorithms are discussed in terms of achieving better results through optimization. This article provides insight into existing gaps and possible refinements for WBAN researchers that can motivate new ideas for reliable solutions. Performance comparison and evaluation of different bio-inspired optimization algorithms are discussed for further improvement of optimized routing algorithms.
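Particle Swarm Optimization is one of the meta-heuristics the review surveys; the toy sketch below minimizes a made-up two-parameter routing-cost function with a standard PSO loop. The cost model and all constants are illustrative assumptions and are not drawn from any of the surveyed papers.

# Toy Particle Swarm Optimization (PSO) sketch minimising a hypothetical
# routing-cost function; constants and cost model are invented for the demo.
import random

def routing_cost(x):
    # Hypothetical cost: trade-off between transmission power (x[0]) and hop count (x[1]).
    return (x[0] - 3.0) ** 2 + (x[1] - 5.0) ** 2

def pso(cost, dim=2, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=10.0):
    random.seed(0)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=cost)                # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
    return gbest

print("Best parameters found:", pso(routing_cost))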
Citations: 2
Performance of Neural Computing Techniques in Communication Networks
Pub Date : 2023-04-05 DOI: 10.53759/7669/jmc202303010
Junho Jeong
This research investigates the use of neural computing techniques in communication networks and evaluates their performance based on error rate, delay, and throughput. The results indicate that different neural computing techniques, such as Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Generative Adversarial Networks (GANs), involve different trade-offs in terms of their effectiveness in improving performance. The selection of a technique will depend on the particular requirements of the application. The research also evaluates the relative performance of different communication network architectures and identifies the trade-offs and limitations associated with applying different techniques in communication networks. The study suggests that further research is needed to explore techniques such as deep reinforcement learning in communication networks and to investigate how these techniques can be used to improve the security and robustness of communication networks.
Citations: 1
Human Intelligence and Value of Machine Advancements in Cognitive Science: A Design Thinking Approach
Pub Date : 2023-04-05 DOI: 10.53759/7669/jmc202303015
Akshaya V S, Beatriz Lúcia Salvador Bizotto, Mithileysh Sathiyanarayanan
Latent Semantic Analysis (LSA) is an approach for representing and extracting textual meaning using statistical evaluation or modeling applied to vast corpora of text, and its development has been a major motivation for this study's understanding of the design thinking approach. We introduce LSA and give some instances of how it might be used to further our knowledge of cognition and to develop practical technology. Since LSA's inception, other statistical models for meaning detection and analysis in text corpora have been created, tested, and refined. This study demonstrates the value that statistical models of semantics provide to the study of cognitive science and the development of cognition. These models are particularly useful because they enable researchers to study a wide range of problems pertaining to knowledge, discourse perception, text cognition, and language using expansive representations of human intelligence.
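A minimal sketch of LSA as it is commonly approximated in practice (TF-IDF followed by truncated SVD) is shown below; the toy corpus and the two-dimensional latent space are assumptions made purely for illustration.

# Minimal LSA sketch: TF-IDF followed by truncated SVD, the standard practical
# approximation of LSA. The toy corpus is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = [
    "the cat sat on the mat",
    "dogs and cats are common pets",
    "stock markets fell sharply today",
    "investors worry about market volatility",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(corpus)            # documents x terms matrix

lsa = TruncatedSVD(n_components=2, random_state=0)
doc_topics = lsa.fit_transform(X)          # documents projected into 2 latent dimensions

for doc, vec in zip(corpus, doc_topics):
    print(f"{doc!r:45s} -> {vec.round(2)}")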
Citations: 1
LM-GA: A Novel IDS with AES and Machine Learning Architecture for Enhanced Cloud Storage Security
Pub Date : 2023-04-05 DOI: 10.53759/7669/jmc202303008
Thilagam T, Aruna R
Cloud Computing (CC) is a relatively new technology that allows for widespread access and storage on the internet. Despite its low cost and numerous benefits, cloud technology still confronts several obstacles, including data loss, quality concerns, and data-security threats such as recurring hacking. The security of data stored in the cloud has become a major worry for both Cloud Service Providers (CSPs) and users. As a result, a powerful Intrusion Detection System (IDS) must be set up to detect and prevent possible cloud threats at an early stage. To develop a novel IDS, this paper introduces a new optimization concept named the Lion Mutated Genetic Algorithm (LM-GA), hybridized with machine learning (ML) models such as the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). Initially, the input text data are preprocessed and balanced to avoid redundancy and vague data. The preprocessed data are then fed to a hybrid deep learning (DL) model, the CNN-LSTM model, to obtain the IDS output. Intruded data are discarded, and non-intruded data are secured using the Advanced Encryption Standard (AES) encryption model. The optimal key selection is done by the proposed LM-GA model, and the cipher text is further secured via a steganography approach. NSL-KDD and UNSW-NB15 are the datasets used to verify the performance of the proposed LM-GA-based IDS in terms of average intrusion detection rate, accuracy, precision, recall, and F-score.
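The sketch below illustrates only the AES protection step at the end of the pipeline, using AES-GCM from the widely used cryptography package. The randomly generated key stands in for the LM-GA-optimized key described in the abstract, and the steganography stage is not shown.

# Sketch of the AES protection step only (not the LM-GA key selection or the
# steganography stage). Uses AES-GCM from the 'cryptography' package; the random
# key below stands in for the LM-GA-optimised key described in the abstract.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # placeholder for the optimised key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; never reuse with the same key

record = b"non-intruded network record to be stored in the cloud"
ciphertext = aesgcm.encrypt(nonce, record, None)
assert aesgcm.decrypt(nonce, ciphertext, None) == record
print("ciphertext length:", len(ciphertext))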
Citations: 1
Enhanced Security for Large-Scale 6G Cloud Computing: A Novel Approach to Identity based Encryption Key Generation
Pub Date : 2023-04-05 DOI: 10.53759/7669/jmc202303009
Gopal Rathinam, B. M., Arulkumar V, Kumaresan M, A. S, Bhuvana J
Cloud computing and 6G networks are in high demand at present due to their appealing features, as is the security of data stored in the cloud. Several computationally complicated and challenging methods can be used for cloud security. Identity-based encryption (IBE) is among the most widely used techniques for protecting data transmitted over the cloud. To prevent malicious attacks, it serves as an access policy that restricts access to legible data to authorized users only. The four stages of IBE are setup, key extraction or generation, encryption, and decryption. Key generation is a necessary and time-consuming phase in the creation of a security key, and creating uncrackable and non-derivable secure keys is a difficult computational and decisional task. To prevent user identities from being leaked even if an opponent or attacker manages to obtain encrypted material or to decode the key, this study presents an advanced identity-based encryption technique with an equality test. The results of the experiments demonstrate that the proposed algorithm encrypts and decrypts data faster than the efficient selective-ID secure IBE strategy, a competitive approach. One of the most significant aspects of the proposed method is its ability to conceal the identity of the user by utilizing the Lagrange coefficient, which is constructed from a polynomial interpolation function.
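The Lagrange coefficient the abstract refers to can be illustrated generically with polynomial interpolation over a prime field, shown below in the familiar setting of reconstructing a polynomial's value at x = 0 from shares. This is only the interpolation mechanics under assumed parameters, not the paper's IBE construction.

# Generic illustration of the Lagrange coefficient over a prime field, shown in
# the familiar setting of recovering f(0) from shares of a polynomial. This is
# NOT the paper's IBE scheme, just the interpolation mechanics.
P = 2**61 - 1  # a Mersenne prime used as the field modulus (assumed for the demo)

def lagrange_coefficient(i, xs, p=P):
    """Lagrange basis polynomial l_i evaluated at x = 0, modulo p."""
    num, den = 1, 1
    for j, xj in enumerate(xs):
        if j != i:
            num = (num * (-xj)) % p
            den = (den * (xs[i] - xj)) % p
    return (num * pow(den, -1, p)) % p

def interpolate_at_zero(points, p=P):
    xs = [x for x, _ in points]
    return sum(y * lagrange_coefficient(i, xs, p) for i, (_, y) in enumerate(points)) % p

# Shares of the polynomial f(x) = 1234 + 77x + 5x^2, so f(0) = 1234 is the secret.
f = lambda x: (1234 + 77 * x + 5 * x * x) % P
shares = [(x, f(x)) for x in (2, 5, 7)]
print(interpolate_at_zero(shares))   # -> 1234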
Citations: 4
Design and Development of Multi-Sensor ADEP for Bore Wells Integrated with IoT Enabled Monitoring Framework
Pub Date : 2023-04-05 DOI: 10.53759/7669/jmc202303014
S. K, L. J., J. M, Balamurugan Easwaran
Typically, groundwater supplies about 51% of drinking water worldwide and is regarded as the major source of irrigation water. Moreover, monitoring and assessing groundwater via bore wells is essential to identify the effect of seasonal changes, precipitation, and water extraction. Hence, there is a need to design a depth sensor probe for bore wells so as to analyze and monitor the quality of underground water and thereby estimate geophysical variations such as landslides and earthquakes. Once the depth sensor probe is designed, the data are collected over a wireless sensor network (WSN) medium and stored in the cloud for further monitoring and analysis. WSNs are among the most promising technologies offering real-time monitoring of geographical areas. The wireless medium in turn senses and gathers data such as rainfall, movement, vibration, moisture, and the hydrological and geological aspects of soil, which helps in better understanding landslide or earthquake disasters. This paper presents the design and development of a geophysical sensor probe for deep bore wells to monitor and collect data such as geological and hydrological conditions. The collected data are then transmitted over the wireless network to analyze geological changes that can cause natural disasters and to assess water quality.
Citations: 1
Different Numerical Techniques, Modeling and Simulation in Solving Complex Problems
Pub Date : 2023-04-05 DOI: 10.53759/7669/jmc202303007
Seng-Phil Hong
This study investigates the performance of different numerical techniques, modeling, and simulation in solving complex problems. The Finite Element Method was found to be the most precise numerical approach for simulating the behavior of structures under loading conditions, the Finite Difference Method the most efficient numerical technique for simulating fluid flow and heat transfer problems, and the Boundary Element Method the most effective numerical technique for solving problems involving singularities, such as those found in acoustics and electromagnetics. The mathematical model established in this research was able to forecast the behavior of the system under different conditions with an error of less than 5%, and the physical model was able to replicate the behavior of the system under different conditions with an error of less than 2%. The use of multi-physics or multi-scale modeling was found to be effective in overcoming the limitations of traditional numerical techniques. The results of this research have significant implications for the field of numerical techniques, modeling, and simulation, and can be used to guide engineers and researchers in choosing the most appropriate numerical technique for their specific problem or application.
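As a textbook illustration of the Finite Difference Method mentioned for heat-transfer problems, the sketch below applies an explicit (FTCS) scheme to the one-dimensional heat equation; the grid, time step, and boundary conditions are arbitrary demo choices, not taken from the study.

# Textbook Finite Difference Method demo: explicit (FTCS) scheme for the 1-D
# heat equation u_t = alpha * u_xx on a rod with fixed-temperature ends.
import numpy as np

alpha, L, nx, nt = 1.0, 1.0, 51, 500
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha           # satisfies the stability limit dt <= 0.5*dx^2/alpha

u = np.zeros(nx)
u[nx // 2] = 1.0                   # initial heat spike in the middle of the rod

for _ in range(nt):
    u[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0             # Dirichlet boundaries held at zero

print("peak temperature after diffusion:", round(float(u.max()), 4))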
Citations: 2
An Efficient Priority Queue Data Structure for Big Data Applications
Pub Date : 2023-04-01 DOI: 10.18178/ijml.2023.13.2.1129
James Rhodes, E. Doncker
Abstract—We have designed and developed an efficient priority queue data structure that utilizes buckets into which data elements are inserted and from which data elements are deleted. The data structure leverages hashing to determine the appropriate bucket for a data element based on the data element's key value. This allows the data structure to access data elements that are in the queue with an O(1) time complexity. Heaps access data elements that are in the queue with an O(log n) time complexity, where n is the number of nodes on the heap. Thus, the data structure improves the performance of applications that utilize a min/max heap. Targeted areas include big data applications, data science, artificial intelligence, and parallel processing. In this paper, we present results for several applications and demonstrate that the data structure, when used to replace a min/max heap, improves application performance by reducing execution time. The performance improvement increases as the number of data elements placed in the queue increases. In addition to being designed as a double-ended priority queue (DEPQ), the data structure can be configured as a queue (FIFO), a stack (LIFO), or a set (which doesn't allow duplicates).
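A simplified sketch of a bucket-based double-ended priority queue (DEPQ) follows. The authors' structure hashes arbitrary keys to buckets; here integer keys in a known range index buckets directly, which is enough to show the O(1) bucket access on insert, while the min/max scans are deliberately naive and omit the bookkeeping a full implementation would keep.

# Simplified bucket-based DEPQ sketch (not the authors' implementation).
class BucketDEPQ:
    def __init__(self, max_key):
        self.buckets = [[] for _ in range(max_key + 1)]  # one bucket per key value
        self.size = 0

    def insert(self, key, value):
        self.buckets[key].append(value)                  # O(1): key indexes its bucket
        self.size += 1

    def _first_nonempty(self, order):
        for key in order:
            if self.buckets[key]:
                return key
        raise IndexError("queue is empty")

    def delete_min(self):
        key = self._first_nonempty(range(len(self.buckets)))
        self.size -= 1
        return key, self.buckets[key].pop()

    def delete_max(self):
        key = self._first_nonempty(range(len(self.buckets) - 1, -1, -1))
        self.size -= 1
        return key, self.buckets[key].pop()

q = BucketDEPQ(max_key=100)
for k, v in [(42, "a"), (7, "b"), (99, "c")]:
    q.insert(k, v)
print(q.delete_min())   # (7, 'b')
print(q.delete_max())   # (99, 'c')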
Citations: 0