Deep Learning Based Hybrid Network Architecture to Diagnose IoT Sensor Signal in Healthcare System
Pub Date: 2023-04-05 | DOI: 10.53759/7669/jmc202303011
S. S., M. S. Koti
IoT is a fascinating technology in today's IT world, in which items may transmit data and interact through intranet or internet networks. The Internet of Things (IoT) has shown great promise in connecting various medical equipment, sensors, and healthcare specialists to provide high-quality medical services remotely. As a result, patient safety has improved, healthcare expenses have fallen, healthcare services have become more accessible, and operational efficiency has increased in the healthcare industry. Healthcare IoT signal analysis is now widely employed in clinics as a critical tool for diagnosing health issues. In the medical domain, automated identification and classification technologies help clinicians make more accurate and timely diagnoses. In this paper, we propose a deep learning-based hybrid network architecture, CNN-R-LSTM (DCRL), that combines the characteristics of a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN)-based long short-term memory (LSTM) to diagnose IoT sensor signals and classify them into three categories: healthy, patient, and serious illness. The deep CNN-R-LSTM algorithm classifies the IoT healthcare data via a dedicated neural network model. For our study, we used the MIT-BIH dataset, the Pima Indians Diabetes dataset, the BP dataset, and the Cleveland Cardiology dataset. The experimental results revealed strong classification performance, with accuracy, specificity, and sensitivity of 99.02%, 99.47%, and 99.56%, respectively. Our proposed DCRL model operates on healthcare IoT sensor inputs and may aid clinicians in effectively recognizing a patient's health condition.
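As a rough illustration of this kind of hybrid, the sketch below stacks 1-D convolutional feature extraction in front of an LSTM in PyTorch. All layer sizes, the window length, and the three-class head are assumptions for illustration; the paper's exact DCRL configuration is not reproduced here.

```python
# A minimal CNN + LSTM hybrid classifier sketch (illustrative sizes, not the paper's DCRL).
import torch
import torch.nn as nn

class CNNRLSTM(nn.Module):
    def __init__(self, n_channels=1, n_classes=3, hidden=64):
        super().__init__()
        # 1-D convolutions extract local waveform features from the sensor signal.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models temporal dependencies across the CNN feature sequence.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # healthy / patient / serious illness

    def forward(self, x):                 # x: (batch, channels, time)
        feats = self.cnn(x)               # (batch, 64, time/4)
        feats = feats.transpose(1, 2)     # (batch, time/4, 64) for the LSTM
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])      # classify from the last time step

model = CNNRLSTM()
logits = model(torch.randn(8, 1, 360))    # e.g. 360-sample signal windows
```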
Energy Efficient Clustering and Routing Using Hybrid Fuzzy with Modified Rider Optimization Algorithm in IoT - Enabled Wireless Body Area Network
Pub Date: 2023-04-05 | DOI: 10.53759/7669/jmc202303016
D. A, Rangaraj J
Wireless sensor networks are widely used in various Internet of Things applications, including healthcare, underwater sensor networks, body area networks, and office environments. A Wireless Body Area Network (WBAN) simplifies medical department tasks and provides a solution that reduces the possibility of errors in the medical diagnostic process. The growing demand for real-time applications in such networks will stimulate significant research activity. Designing for such critical events while maintaining energy efficiency is difficult due to dynamic changes in network topology, strict power constraints, and limited computing power. Routing protocol design becomes crucial to WBAN and significantly impacts the communication stack and network performance. High node mobility in WBAN results in quick topology changes, affecting network scalability. Node clustering is one of several mechanisms used in WBANs to address this issue. We consider optimization factors such as distance, latency, and power consumption of IoT devices to achieve the desired cluster head (CH) selection. This paper proposes a high-level CH selection and routing approach using a hybrid fuzzy system with a modified Rider Optimization Algorithm (MROA). This research work is implemented in MATLAB, and the simulations are carried out under a range of conditions. In terms of energy consumption and network lifetime, the proposed scheme outperforms current state-of-the-art techniques such as Low Energy Adaptive Clustering Hierarchy (LEACH), the Energy Control Routing Algorithm (ECCRA), the Energy Efficient Routing Protocol (EERP), and the Simplified Energy Balancing Alternative Aware Routing Algorithm (SEAR).
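To make the CH-selection idea concrete, here is a minimal sketch that scores candidate nodes with a weighted cost over the factors named above (distance, latency, residual energy). The node model and weights are illustrative assumptions; the actual hybrid fuzzy/MROA scoring is considerably richer.

```python
# A minimal fitness-based cluster-head selection sketch (weights and node
# model are illustrative assumptions, not the paper's fuzzy/MROA method).
import random

def ch_fitness(node, w_dist=0.4, w_delay=0.3, w_energy=0.3):
    # Lower distance/latency and higher residual energy make a better CH,
    # so energy enters with a negative sign in this cost.
    return (w_dist * node["dist_to_sink"]
            + w_delay * node["latency"]
            - w_energy * node["residual_energy"])

nodes = [{"id": i,
          "dist_to_sink": random.uniform(1, 50),   # meters (toy values)
          "latency": random.uniform(1, 10),        # milliseconds
          "residual_energy": random.uniform(0, 1)} # normalized battery level
         for i in range(20)]

cluster_head = min(nodes, key=ch_fitness)          # lowest-cost candidate wins
print("Selected CH:", cluster_head["id"])
```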
{"title":"Energy Efficient Clustering and Routing Using Hybrid Fuzzy with Modified Rider Optimization Algorithm in IoT - Enabled Wireless Body Area Network","authors":"D. A, Rangaraj J","doi":"10.53759/7669/jmc202303016","DOIUrl":"https://doi.org/10.53759/7669/jmc202303016","url":null,"abstract":"Wireless sensor networks are widely used in various Internet of Things applications, including healthcare, underwater sensor networks, body area networks, and multiple offices. Wireless Body Area Network (WBAN) simplifies medical department tasks and provides a solution that reduces the possibility of errors in the medical diagnostic process. The growing demand for real-time applications in such networks will stimulate significant research activity. Designing scenarios for such critical events while maintaining energy efficiency is difficult due to dynamic changes in network topology, strict power constraints, and limited computing power. The routing protocol design becomes crucial to WBAN and significantly impacts the communication stack and network performance. High node mobility in WBAN results in quick topology changes, affecting network scalability. Node clustering is one of many other mechanisms used in WBANs to address this issue. We consider optimization factors like distance, latency, and power consumption of IoT devices to achieve the desired CH selection. This paper proposes a high-level CH selection and routing approach using a hybrid fuzzy with a modified Rider Optimization Algorithm (MROA). This research work is implemented using MATLAB software. The simulations are carried out under a range of conditions. In terms of energy consumption and network life time, the proposed scheme outperforms current state-of-the-art techniques like Low Energy Adaptive Clustering Hierarchy (LEACH), Energy Control Routing Algorithm (ECCRA), Energy Efficient Routing Protocol (EERP), and Simplified Energy Balancing Alternative Aware Routing Algorithm (SEAR).","PeriodicalId":91709,"journal":{"name":"International journal of machine learning and computing","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75100864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Insight on Optimization Techniques for Uncertain and Reliable Routing in Wireless Body Area Networks
Pub Date: 2023-04-05 | DOI: 10.53759/7669/jmc202303013
K. Sakthivel, Rajkumar Ganesan
In recent times, Wireless Body Area Networks (WBANs), a subset of Wireless Sensor Networks, have emerged as a promising technology for the future healthcare realm, with cutting-edge capabilities that can assist healthcare professionals such as doctors, nurses, and biomedical engineers. Machine learning and Internet of Things-enabled medical big data are the future of the healthcare sector and medical-technology industries, with applications in other areas such as commercial fitness tracking, monitoring of athletes' day-to-day activities, and wearable devices for critical and emergency care. This comprehensive review addresses the state of the art in WBANs and the role of optimization techniques and meta-heuristic algorithms in finding an efficient routing path between a source node and a destination node; such techniques play an effective role in optimizing network parameters such as radio range, energy consumption, throughput, data aggregation, clustering, and routing. Designing energy-efficient routing for WBANs is a challenging task due to uncertainty in dynamic network topology, energy constraints, and limited power budgets. Optimization techniques can help researchers overcome these drawbacks and improve the energy efficiency of the network. In this article, we focus mainly on the effectiveness of optimization algorithms for WBAN routing mechanisms and summarize earlier studies from the 2012-2023 period. Genetic Algorithm, Particle Swarm Optimization, Ant Colony Optimization, Artificial Bee Colony, and Firefly optimization algorithms are discussed with respect to how they converge toward optima for better results. This article provides insight into existing gaps and possible extensions that can motivate WBAN researchers to propose new ideas for reliable solutions. A performance comparison and evaluation of different bio-inspired optimization algorithms is also presented for further improvement of optimized routing algorithms.
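As one concrete example of the swarm methods surveyed, the sketch below implements a plain particle swarm optimization loop minimizing a toy routing-cost function. The objective, swarm size, and coefficients are illustrative assumptions, not taken from any reviewed paper.

```python
# A minimal particle swarm optimization (PSO) sketch minimizing a toy
# routing-cost objective; all parameters are illustrative assumptions.
import random

def route_cost(x):  # stand-in for an energy/latency routing objective
    return sum(xi ** 2 for xi in x)

dim, n_particles, iters = 3, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5  # inertia and cognitive/social acceleration

pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]                 # each particle's best position
gbest = min(pbest, key=route_cost)          # swarm-wide best position

for _ in range(iters):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            # Velocity blends inertia, pull toward personal best, and pull
            # toward the global best.
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if route_cost(pos[i]) < route_cost(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=route_cost)

print("best cost found:", route_cost(gbest))
```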
Performance of Neural Computing Techniques in Communication Networks
Pub Date: 2023-04-05 | DOI: 10.53759/7669/jmc202303010
Junho Jeong
This research investigates the use of neural computing techniques in communication networks and evaluates their performance in terms of error rate, delay, and throughput. The results indicate that different neural computing techniques, such as Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM), and Generative Adversarial Networks (GANs), involve different trade-offs in their effectiveness at improving performance; the choice of technique depends on the particular requirements of the application. The research also evaluates the relative performance of different communication network architectures and identifies the trade-offs and limitations associated with applying different techniques in communication networks. Further research is needed to explore techniques such as deep reinforcement learning in communication networks and to investigate how these techniques can improve the security and robustness of communication networks.
{"title":"Performance of Neural Computing Techniques in Communication Networks","authors":"Junho Jeong","doi":"10.53759/7669/jmc202303010","DOIUrl":"https://doi.org/10.53759/7669/jmc202303010","url":null,"abstract":"This research investigates the use of neural computing techniques in communication networks and evaluates their performance based on error rate, delay, and throughput. The results indicate that different neural computing techniques, such as Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) and Generative Adversarial Networks (GANs) have different trade-offs in terms of their effectiveness in improving performance. The selection of technique will base on the particular requirements of the application. The research also evaluates the relative performance of different communication network architectures and identified the trade-offs and limitations associated with the application of different techniques in communication networks. The research suggests that further research is needed to explore the use of techniques, such as deep reinforcement learning; in communication networks and to investigate how the employment of techniques can be used to improve the security and robustness of communication networks.","PeriodicalId":91709,"journal":{"name":"International journal of machine learning and computing","volume":"77 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83317633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Intelligence and Value of Machine Advancements in Cognitive Science: A Design Thinking Approach
Pub Date: 2023-04-05 | DOI: 10.53759/7669/jmc202303015
Akshaya V S, Beatriz Lúcia Salvador Bizotto, Mithileysh Sathiyanarayanan
Latent Semantic Analysis (LSA) is an approach for expressing and extracting textual meaning using statistical evaluation or modeling applied to vast corpora of text, and its development has been a major motivation for this study's design-thinking approach. We introduce LSA and give some instances of how it might be used to further our knowledge of cognition and to develop practical technology. Since LSA's inception, other alternative statistical models for meaning detection and analysis in text corpora have been created, tested, and refined. This study demonstrates the value that statistical models of semantics provide to the study of cognitive science and the development of cognition. These models are particularly useful because they enable researchers to study a wide range of problems pertaining to knowledge, discourse perception, text cognition, and language using expansive representations of human intelligence.
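For readers unfamiliar with the mechanics, LSA is commonly implemented as a truncated SVD of a term-document matrix; the sketch below shows one such pipeline with scikit-learn. The toy corpus and component count are assumptions for illustration, not the study's materials.

```python
# A minimal LSA sketch: TF-IDF term-document matrix followed by truncated SVD.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = [
    "the user studies text comprehension",
    "semantic models represent word meaning",
    "word meaning emerges from large text corpora",
]

tfidf = TfidfVectorizer().fit_transform(corpus)   # sparse term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = lsa.fit_transform(tfidf)            # documents in the latent space
print(doc_vectors)                                # similar meanings land nearby
```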
{"title":"Human Intelligence and Value of Machine Advancements in Cognitive Science A Design thinking Approach","authors":"Akshaya V S, Beatriz Lúcia Salvador Bizotto, Mithileysh Sathiyanarayanan","doi":"10.53759/7669/jmc202303015","DOIUrl":"https://doi.org/10.53759/7669/jmc202303015","url":null,"abstract":"Latent Semantic Analysis (LSA) is an approach used for expressing and extracting textual meanings using statistical evaluations or modeling applied to vast corpora of text, and its development has been a major motivation for this study to understand the design thinking approach. We introduced LSA and gave some instances of how it might be used to further our knowledge of cognition and to develop practical technology. Since LSA's inception, other alternative statistical models for meaning detection and analysis in text corpora have been created, tested, and refined. This study demonstrates the value that statistical models of semantics provide to the study of cognitive science and the development of cognition. These models are particularly useful because they enable researchers to study a wide range of problems pertaining to knowledge, discourse perception, text cognition, and language using expansive representations of human intelligence.","PeriodicalId":91709,"journal":{"name":"International journal of machine learning and computing","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90619370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LM-GA: A Novel IDS with AES and Machine Learning Architecture for Enhanced Cloud Storage Security
Pub Date: 2023-04-05 | DOI: 10.53759/7669/jmc202303008
Thilagam T, Aruna R
Cloud Computing (CC) is a relatively new technology that allows for widespread access and storage on the internet. Despite its low cost and numerous benefits, cloud technology still confronts several obstacles, including data loss, quality concerns, and data security issues such as recurring hacking. The security of data stored in the cloud has become a major worry for both Cloud Service Providers (CSPs) and users. As a result, a powerful Intrusion Detection System (IDS) must be set up to detect and prevent possible cloud threats at an early stage. To develop such a novel IDS, this paper introduces a new optimization concept named the Lion Mutated Genetic Algorithm (LM-GA), hybridized with machine learning (ML) models, namely a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). Initially, the input text data is preprocessed and balanced to avoid redundancy and vague data. The preprocessed data is then fed to the hybrid deep learning (DL) model, namely the CNN-LSTM model, to obtain the IDS output. Intruded data are discarded, and non-intruded data are secured using the Advanced Encryption Standard (AES) encryption model. In addition, the optimal key selection is done by the proposed LM-GA model, and the ciphertext is further secured via a steganography approach. NSL-KDD and UNSW-NB15 are the datasets used to verify the performance of the proposed LM-GA-based IDS in terms of average intrusion detection rate, accuracy, precision, recall, and F-score.
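The AES stage can be illustrated with a short sketch using AES-GCM from the Python cryptography package: records the IDS passes as non-intruded are encrypted under a symmetric key. The random key below is a placeholder; the paper derives its key via the proposed LM-GA optimizer, which is not reproduced here.

```python
# A minimal AES-GCM sketch for securing non-intruded records.
# The random key stands in for the paper's LM-GA-selected key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # placeholder for the LM-GA key
aesgcm = AESGCM(key)

record = b"non-intruded healthcare record"
nonce = os.urandom(12)                       # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, record, None)

# GCM authenticates as well as encrypts: tampering makes decryption fail.
assert aesgcm.decrypt(nonce, ciphertext, None) == record
```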
{"title":"LM-GA: A Novel IDS with AES and Machine Learning Architecture for Enhanced Cloud Storage Security","authors":"Thilagam T, Aruna R","doi":"10.53759/7669/jmc202303008","DOIUrl":"https://doi.org/10.53759/7669/jmc202303008","url":null,"abstract":"Cloud Computing (CC) is a relatively new technology that allows for widespread access and storage on the internet. Despite its low cost and numerous benefits, cloud technology still confronts several obstacles, including data loss, quality concerns, and data security like recurring hacking. The security of data stored in the cloud has become a major worry for both Cloud Service Providers (CSPs) and users. As a result, a powerful Intrusion Detection System (IDS) must be set up to detect and prevent possible cloud threats at an early stage. Intending to develop a novel IDS system, this paper introduces a new optimization concept named Lion Mutated-Genetic Algorithm (LM-GA) with the hybridization of Machine Learning (ML) algorithms such as Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). Initially, the input text data is preprocessed and balanced to avoid redundancy and vague data. The preprocessed data is then subjected to the hybrid Deep Learning (DL) models namely the CNN-LSTM model to get the IDS output. Now, the intruded are discarded and non-intruded data are secured using Advanced Encryption Standard (AES) encryption model. Besides, the optimal key selection is done by the proposed LM-GA model and the cipher text is further secured via the steganography approach. NSL-KDD and UNSW-NB15 are the datasets used to verify the performance of the proposed LM-GA-based IDS in terms of average intrusion detection rate, accuracy, precision, recall, and F-Score.","PeriodicalId":91709,"journal":{"name":"International journal of machine learning and computing","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75756556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhanced Security for Large-Scale 6G Cloud Computing: A Novel Approach to Identity-Based Encryption Key Generation
Pub Date: 2023-04-05 | DOI: 10.53759/7669/jmc202303009
Gopal Rathinam, B. M., Arulkumar V, Kumaresan M, A. S, Bhuvana J
Cloud computing and 6G networks are in high demand at present due to their appealing features, as well as the security of data stored in the cloud. Various computationally complicated methods can be used in cloud security. Identity-based encryption (IBE) is one of the most widely used techniques for protecting data transmitted over the cloud. To prevent malicious attacks, it enforces an access policy that restricts access to legible data to authorized users only. The four stages of IBE are setup, key extraction or generation, encryption, and decryption. Key generation is a necessary and time-consuming phase in the creation of a security key, and producing uncrackable and non-derivable secure keys is a difficult computational and decisional task. To prevent user identities from being leaked even if an adversary manages to access the encrypted material or decode the key, this study presents an advanced identity-based encryption technique with an equality test. The experimental results demonstrate that the proposed algorithm encrypts and decrypts data faster than the efficient selective-ID secure IBE strategy, a competitive approach. One of the method's most significant aspects is its ability to conceal the identity of the user by utilizing the Lagrange coefficient, which is constructed from a polynomial interpolation function.
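Since the Lagrange coefficient does the identity-hiding work, a minimal sketch of that piece follows: the Lagrange basis polynomial evaluated over a prime field. The prime and share points are toy values for illustration, not the scheme's actual parameters.

```python
# A minimal Lagrange-coefficient sketch over a prime field (toy parameters).

def lagrange_coeff(i, points, x, p):
    """Lagrange basis L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j) mod p."""
    num, den = 1, 1
    for j in points:
        if j != i:
            num = num * (x - j) % p
            den = den * (i - j) % p
    return num * pow(den, -1, p) % p   # modular inverse via pow(den, -1, p)

p = 2087                               # toy prime field
points = [1, 2, 3]                     # x-coordinates of the shares
coeffs = [lagrange_coeff(i, points, 0, p) for i in points]
print(coeffs)                          # coefficients to interpolate at x = 0
```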
{"title":"Enhanced Security for Large-Scale 6G Cloud Computing: A Novel Approach to Identity based Encryption Key Generation","authors":"Gopal Rathinam, B. M., Arulkumar V, Kumaresan M, A. S, Bhuvana J","doi":"10.53759/7669/jmc202303009","DOIUrl":"https://doi.org/10.53759/7669/jmc202303009","url":null,"abstract":"Cloud computing and 6G networks are in high demand at present due to their appealing features as well as the security of data stored in the cloud. There are various challenging methods that are computationally complicated that can be used in cloud security. Identity-based encryption (IBE) is the most widely used techniques for protecting data transmitted over the cloud. To prevent a malicious attack, it is an access policy that restricts access to legible data to only authorized users. The four stages of IBE are setup, key extraction or generation, decryption and encryption. Key generation is a necessary and time-consuming phase in the creation of a security key. The creation of uncrackable and non-derivable secure keys is a difficult computational and decisional task. In order to prevent user identities from being leaked, even if an opponent or attacker manages to encrypted material or to decode the key this study presents an advanced identity-based encryption technique with an equality test. The results of the experiments demonstrate that the proposed algorithm encrypts and decrypts data faster than the efficient selective-ID secure IBE strategy, a competitive approach. The proposed method's ability to conceal the identity of the user by utilizing the Lagrange coefficient, which is constituted of a polynomial interpolation function, is one of its most significant aspects.","PeriodicalId":91709,"journal":{"name":"International journal of machine learning and computing","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77628519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and Development of Multi-Sensor ADEP for Bore Wells Integrated with IoT Enabled Monitoring Framework
Pub Date: 2023-04-05 | DOI: 10.53759/7669/jmc202303014
S. K, L. J., J. M, Balamurugan Easwaran
Globally, groundwater supplies about 51% of drinking water and is regarded as the major source of irrigation. Monitoring and assessing groundwater through bore wells is essential to identify the effects of seasonal changes, precipitation, and water extraction. Hence, there is a need to design a depth sensor probe for bore wells to analyze and monitor the quality of underground water and to estimate geophysical variations such as landslides or earthquakes. Once the depth sensor probe is deployed, the data is collected over a wireless sensor network (WSN) and stored in the cloud for further monitoring and analysis. WSNs are among the most promising technologies offering real-time monitoring for geographical areas. The wireless medium senses and gathers data such as rainfall, movement, vibration, moisture, and the hydrological and geological aspects of soil, which helps in better understanding landslide and earthquake disasters. This paper presents the design and development of a geophysical sensor probe for deep bore wells to monitor and collect data on geological and hydrological conditions. The collected data is then transmitted over the wireless network to analyze geological changes that can cause natural disasters and to assess water quality.
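One plausible shape for the probe-to-cloud path is sketched below: readings packaged as JSON and posted to a cloud ingestion endpoint. The endpoint URL, field names, and use of the requests library are illustrative assumptions; the paper's actual telemetry stack is not specified here.

```python
# A minimal probe-to-cloud telemetry sketch; the endpoint and all field
# names are hypothetical stand-ins for the paper's monitoring framework.
import time
import requests

reading = {
    "probe_id": "borewell-01",     # hypothetical probe identifier
    "timestamp": time.time(),
    "water_depth_m": 42.7,         # depth-sensor reading
    "vibration": 0.03,             # geophysical activity indicator
    "soil_moisture": 0.61,
}

resp = requests.post("https://cloud.example.com/api/telemetry",
                     json=reading, timeout=10)
resp.raise_for_status()            # fail loudly if the cloud rejects the data
```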
{"title":"Design and Development of Multi-Sensor ADEP for Bore Wells Integrated with IoT Enabled Monitoring Framework","authors":"S. K, L. J., J. M, Balamurugan Easwaran","doi":"10.53759/7669/jmc202303014","DOIUrl":"https://doi.org/10.53759/7669/jmc202303014","url":null,"abstract":"Typically, about 51% of the groundwater satisfies the drinking water worldwide and is regarded as the major source for the purpose of irrigation. Moreover, the monitoring and assessment of groundwater over bore wells is essential to identify the effect of seasonal changes, precipitations, and the extraction of water. Hence, there is a need to design a depth sensor probe for bore wells so as to analyze/monitor the quality of underground water thereby estimating any geophysical variations like landslides/earthquakes. Once the depth sensor probe is designed, the data is collected over wireless sensor network (WSN) medium and is stored in cloud for further monitoring and analyzing purposes. WSN is the major promising technologies that offer the real-time monitoring opportunities for geographical areas. The wireless medium in turn senses and gathers data like rainfall, movement, vibration, moisture, hydrological and geological aspects of soil that helps in better understanding of landslide or earthquake disasters. In this paper, the design and development of geophysical sensor probe for the deep bore well so as to monitor and collect the data like geological and hydrological conditions. The data collected is then transmitted by wireless network to analyze the geological changes which can cause natural disaster and water quality assessment.","PeriodicalId":91709,"journal":{"name":"International journal of machine learning and computing","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87175892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Different Numerical Techniques, Modeling and Simulation in Solving Complex Problems
Pub Date: 2023-04-05 | DOI: 10.53759/7669/jmc202303007
Seng-Phil Hong
This study investigates the performance of different numerical techniques, modeling, and simulation in solving complex problems. The Finite Element Method was found to be the most precise numerical approach for simulating the behavior of structures under loading conditions, the Finite Difference Method the most efficient for simulating fluid flow and heat transfer problems, and the Boundary Element Method the most effective for solving problems involving singularities, such as those found in acoustics and electromagnetics. The mathematical model established in this research was able to forecast the behavior of the system under different conditions with an error of less than 5%, and the physical model was able to replicate the system's behavior with an error of less than 2%. The employment of multi-physics or multi-scale modeling was found to be effective in overcoming the limitations of traditional numerical techniques. The results of this research have significant implications for the field of numerical techniques, modeling, and simulation, and can guide engineers and researchers in choosing the most appropriate numerical technique for their specific problem or application.
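As a small example of the finite-difference approach benchmarked here, the sketch below time-steps the 1-D heat equation u_t = alpha * u_xx with an explicit scheme. Grid size, coefficients, and boundary conditions are illustrative assumptions.

```python
# A minimal explicit finite-difference sketch for 1-D heat diffusion
# u_t = alpha * u_xx; all parameters are illustrative.
import numpy as np

alpha, L, nx, nt = 0.01, 1.0, 51, 500
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha        # within the stability bound dt <= dx^2 / (2*alpha)

u = np.zeros(nx)
u[nx // 2] = 1.0                # initial heat spike in the middle of the rod

for _ in range(nt):
    # Central difference in space, forward Euler in time.
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    # u[0] and u[-1] stay 0: fixed-temperature boundaries.

print("peak temperature after diffusion:", u.max())
```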
{"title":"Different Numerical Techniques, Modeling and Simulation in Solving Complex Problems","authors":"Seng-Phil Hong","doi":"10.53759/7669/jmc202303007","DOIUrl":"https://doi.org/10.53759/7669/jmc202303007","url":null,"abstract":"This study investigates the performance of different numerical techniques, modeling, and simulation in solving complex problems. The study found that the Finite Element Method was found to be the most precise numerical approach for simulating the behavior of structures under loading conditions, the Finite Difference Method was found to be the most efficient numerical technique for simulating fluid flow and heat transfer problems, and the Boundary Element Method was found to be the most effective numerical technique for solving problems involving singularities, such as those found in acoustics and electromagnetics. The mathematical model established in this research was able to effectively forecast the behaviors of the system under different conditions, with an error of less than 5%. The physical model established in this research was able to replicate the behavior of the system under different conditions, with an error of less than 2%. The employment of multi-physics or multi-scale modeling was found to be effective in overcoming the limitations of traditional numerical techniques. The results of this research have significant effects for the field of numerical techniques, modeling and simulation, and can be used to guide engineers and researchers in choosing the most appropriate numerical technique for their specific problem or application.","PeriodicalId":91709,"journal":{"name":"International journal of machine learning and computing","volume":"49 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84915375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Priority Queue Data Structure for Big Data Applications
Pub Date: 2023-04-01 | DOI: 10.18178/ijml.2023.13.2.1129
James Rhodes, E. Doncker
We have designed and developed an efficient priority queue data structure that utilizes buckets into which data elements are inserted and from which data elements are deleted. The data structure leverages hashing to determine the appropriate bucket for a data element based on the element's key value. This allows the data structure to access queued data elements with an O(1) time complexity, whereas heaps access queued data elements with an O(log n) time complexity, where n is the number of nodes on the heap. Thus, the data structure improves the performance of applications that utilize a min/max heap. Targeted areas include big data applications, data science, artificial intelligence, and parallel processing. In this paper, we present results for several applications and demonstrate that the data structure, when used to replace a min/max heap, improves application performance by reducing execution time; the improvement grows as the number of queued data elements increases. In addition to being designed as a double-ended priority queue (DEPQ), the data structure can be configured as a queue (FIFO), a stack (LIFO), or a set (which disallows duplicates).
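A minimal sketch of the bucket idea follows: keys hash to buckets by integer division, so an insert lands in the right bucket in O(1) expected time. The fixed bucket width and min-only interface are simplifying assumptions; the authors' structure is double-ended and configurable.

```python
# A minimal bucket-based priority queue sketch in the spirit described above
# (fixed bucket width and min-only interface are simplifying assumptions).
from collections import defaultdict

class BucketPQ:
    def __init__(self, bucket_width=10):
        self.width = bucket_width
        self.buckets = defaultdict(list)   # bucket index -> list of (key, item)

    def _index(self, key):
        return key // self.width           # O(1) mapping from key to bucket

    def insert(self, key, item):
        self.buckets[self._index(key)].append((key, item))

    def delete_min(self):
        idx = min(self.buckets)            # scans bucket indices, not elements
        bucket = self.buckets[idx]
        entry = min(bucket)                # small bucket: short scan
        bucket.remove(entry)
        if not bucket:
            del self.buckets[idx]          # drop empty buckets
        return entry

pq = BucketPQ()
for k in [42, 7, 19, 3]:
    pq.insert(k, f"job-{k}")
print(pq.delete_min())                     # -> (3, 'job-3')
```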
{"title":"An Efficient Priority Queue Data Structure for Big Data Applications","authors":"James Rhodes, E. Doncker","doi":"10.18178/ijml.2023.13.2.1129","DOIUrl":"https://doi.org/10.18178/ijml.2023.13.2.1129","url":null,"abstract":" Abstract —We have designed and developed an efficient priority queue data structure that utilizes buckets into which data elements are inserted and from which data elements are deleted. The data structure leverages hashing to determine the appropriate bucket to place a data element based on the data element’s key value. This allows the data structure to access data elements that are in the queue with an O(1) time complexity. Heaps access data elements that are in the queue with an O(log n) time complexity, where n is the number of nodes on the heap. Thus, the data structure improves the performance of applications that utilize a min/max heap. Targeted areas include big data applications, data science, artificial intelligence, and parallel processing. In this paper, we present results several applications. We demonstrate that the data structure when used to replace a min/max heap improves the performance applications by reducing the execution time. The performance improvement increases as the number of data elements placed in the queue increases. Also, in addition to being designed as a double-ended priority queue (DEPQ), the data structure can be configured to be a queue (FIFO), a stack (LIFO), and a set (which doesn’t allow duplicates).","PeriodicalId":91709,"journal":{"name":"International journal of machine learning and computing","volume":"359 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75424796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}