Pub Date: 2023-01-11 | DOI: 10.17762/ijcnis.v14i3.5630
A. Victoria, S. V. Manikanthan, R. VaradarajuH., Muhammad Alkirom Wildan, K. Kishore
Research on Human Activity Recognition (HAR) has intensified with the growing demand for smart systems. Many wearable and digital smart sensors have already been deployed to classify various activities, and radar-based activity recognition has become an active research area in recent years. To classify radar micro-Doppler signature images, we propose an approach using a Convolutional Neural Network combined with Long Short-Term Memory (CNN-LSTM). The convolutional layers learn filter weights that capture the features of the radar images, while the LSTM layer models the temporal information in the features extracted by the CNN. We use a dataset published by the University of Glasgow that captures six activities performed by 56 subjects of different ages; unlike signals captured in a controlled lab environment, it is a first-of-its-kind dataset. Our model achieves 96.8% accuracy on the training data and 93.5% on the testing data, outperforming existing traditional deep learning architectures.
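As a rough illustration of the temporal modelling the LSTM layer adds on top of CNN features, the following numpy sketch runs a single LSTM cell over a short sequence of per-frame feature vectors. The dimensions, weights, and sequence length are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: x is the input vector, h the hidden state, c the cell
    state; W, U, b hold the four gates' parameters stacked row-wise."""
    n = h.size
    z = W @ x + U @ h + b              # pre-activations, shape (4*n,)
    i = 1 / (1 + np.exp(-z[:n]))       # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))    # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))  # output gate
    g = np.tanh(z[3*n:])               # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 8, 4                            # CNN feature size, hidden size (illustrative)
W = rng.normal(0, 0.1, (4*H, D))
U = rng.normal(0, 0.1, (4*H, H))
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):                     # a sequence of 5 per-frame feature vectors
    x = rng.normal(size=D)
    h, c = lstm_step(x, h, c, W, U, b)
```

After the loop, `h` summarizes the whole sequence and would feed the final classification layer.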
Title: "Radar Based Activity Recognition using CNN-LSTM Network Architecture" (Int. J. Commun. Networks Inf. Secur.)
Pub Date: 2023-01-08 | DOI: 10.17762/ijcnis.v14i1s.5589
V. T. R. P. Ku, M. Arulselvi, K. Sastry
Colon cancer is the second leading cause of cancer death. The challenge in colon cancer detection is accurate identification of the lesion at an early stage, so that mortality and morbidity can be reduced. In this work, a colon cancer classification method is developed using a deep recurrent neural network trained with Dragonfly-based water wave optimization (DWWO). Initially, the input cancer images are pre-processed to remove outer artifacts. The pre-processed images are then segmented using generative adversarial networks (GANs). The resulting segments are passed to an attribute selection module, where statistical features such as mean, variance, kurtosis, and entropy, as well as textural features such as LOOP features, are extracted. Finally, colon cancer classification is performed by the deep RNN, which is trained with the proposed DWWO algorithm. DWWO is developed by integrating the Dragonfly algorithm with water wave optimization.
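The statistical descriptors named above (mean, variance, kurtosis, entropy) can be computed for a segmented region in a few lines of numpy. This sketch uses synthetic intensities, and the histogram-based Shannon entropy is one common choice that the abstract does not spell out:

```python
import numpy as np

def region_features(pixels, bins=32):
    """Statistical descriptors for one segmented region (1-D intensity array)."""
    mean = pixels.mean()
    var = pixels.var()
    # excess kurtosis (Fisher definition: 0 for a normal distribution)
    kurt = ((pixels - mean) ** 4).mean() / (var ** 2 + 1e-12) - 3.0
    # Shannon entropy of the intensity histogram
    hist, _ = np.histogram(pixels, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return {"mean": mean, "variance": var, "kurtosis": kurt, "entropy": entropy}

# synthetic stand-in for one GAN-produced segment's pixel intensities
region = np.random.default_rng(1).normal(0.5, 0.1, 1000)
feats = region_features(region)
```

Each segment's feature dictionary would then be concatenated with texture (LOOP) features before classification.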
Title: "An Optimized Deep Learning Based Optimization Algorithm for the Detection of Colon Cancer Using Deep Recurrent Neural Networks" (Int. J. Commun. Networks Inf. Secur.)
Pub Date: 2023-01-08 | DOI: 10.17762/ijcnis.v14i1s.5586
Bhargavi Goparaju, Bandla Sreenivasa Rao
Distributed denial-of-service (DDoS) attacks are among the most frequently occurring network attacks. With the rapid growth of communication and computer technology, DDoS attacks have become more severe, so research on their detection is essential. Because DDoS attacks take many forms, no single method provides adequate security. To address this, this paper presents a DDoS attack detection technique based on machine learning. The proposed method has two phases: dimensionality reduction and model training for attack detection. The first phase extracts the principal components from the large volume of internet traffic data; these components are then used as input features for the machine learning model in the detection phase. A Support Vector Machine (SVM) is trained on these features to learn the model. Experimental results show that the proposed method detects DDoS attacks with good accuracy.
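A minimal numpy sketch of the first phase, PCA-style dimensionality reduction via eigendecomposition of the covariance matrix, is shown below on synthetic flow features. The paper's exact preprocessing and the SVM training stage are omitted:

```python
import numpy as np

def pca_fit_transform(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu                                   # center the data
    cov = (Xc.T @ Xc) / (len(X) - 1)              # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]            # indices of the k largest
    components = vecs[:, order]                   # top-k principal directions
    return Xc @ components, components, mu

rng = np.random.default_rng(0)
# 200 synthetic flow records with 10 features whose variance lies
# almost entirely in a 2-D subspace, plus a little noise
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10)) \
    + 0.01 * rng.normal(size=(200, 10))
Z, comps, mu = pca_fit_transform(X, 2)            # reduced features for the SVM
```

`Z` is what the detection phase would feed to the SVM in place of the raw 10-feature records.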
Title: "A DDoS Attack Detection using PCA Dimensionality Reduction and Support Vector Machine" (Int. J. Commun. Networks Inf. Secur.)
Pub Date: 2023-01-08 | DOI: 10.17762/ijcnis.v14i1s.5592
N. Rao, Lalitha Bhavani Konkyana, V. Raju, M.S.R. Naidu, Chukka Ramesh Babu
This paper proposes a broadband metasurface-based MIMO antenna with high gain and isolation for 5G millimeter-wave applications. A single antenna is extended into an array configuration to improve gain; as a result, each MIMO element consists of a 1x2 element array fed by a common feedline. A 9x6 split-ring resonator (SRR) elongated cell is stacked above the antenna to further improve gain and suppress coupling between the MIMO elements. Rogers 5880 substrates with thicknesses of 0.787 mm and 1.6 mm are used for the antenna and the metasurface, respectively. Antenna performance is assessed using S-parameters, MIMO characteristics, and radiation patterns. The final design supports 5G applications in the Ka-band mm-wave spectrum, showing a noticeable increase in gain and, once the metasurface is introduced, an improvement in isolation.
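For a sense of scale, the standard rectangular microstrip patch design equations give approximate element dimensions on Rogers 5880. Note that the abstract reports no element dimensions, and the 28 GHz design frequency used here is only an assumed Ka-band operating point:

```python
import math

def patch_dimensions(f0, eps_r, h):
    """Textbook rectangular microstrip patch sizing (transmission-line model).
    f0: design frequency (Hz), eps_r: substrate permittivity, h: thickness (m)."""
    c = 3e8
    W = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))                 # patch width
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * (eps_eff + 0.3) * (W / h + 0.264) / (
         (eps_eff - 0.258) * (W / h + 0.8))                       # fringing extension
    L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL                # patch length
    return W, L

# Rogers 5880 (eps_r ~ 2.2), 0.787 mm substrate, assumed 28 GHz design point
W, L = patch_dimensions(28e9, 2.2, 0.787e-3)
```

The result, a patch a few millimetres on a side, explains why mm-wave elements are compact enough to form 1x2 sub-arrays per MIMO port.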
Title: "A Broadband Meta surface Based MIMO Antenna with High Gain and Isolation For 5G Millimeter Wave Applications" (Int. J. Commun. Networks Inf. Secur.)
Pub Date: 2023-01-08 | DOI: 10.17762/ijcnis.v14i1s.5590
T. Nagalaxmi, E. S. Rao, P. Chandrasekhar
Network on Chip (NoC) is appropriate where System-on-Chip technology must be scalable and adaptable. NoC is a communication architecture with a number of benefits, including scalability, flexibility, and reusability, for applications built on a Multiprocessor System on Chip (MPSoC). However, designing an efficient, high-performance NoC fabric is complex because of its many architectural parameters. Identifying a suitable scheduling algorithm that resolves arbitration among ports and achieves high-speed data transfer in the router is one of the most significant steps in designing an NoC-based MPSoC: the router determines the latency, throughput, area utilization, energy consumption, and reliability of the NoC fabric. NoC performance is also hampered by the deadlock issues that plague conventional routing algorithms. This work develops a novel routing algorithm to address the deadlock problem: a deterministic shortest-path deadlock-free routing method based on analysis of the Turn Model. In the 2D-mesh structure, the algorithm applies separate routing rules to the odd and even columns, which reduces the number of paths per channel, congestion, and latency. Two test scenarios, with and without a load test, were used to evaluate the proposed model. In the zero-load network, three clock cycles are needed to transfer a packet; under load, five clock cycles are needed. The measured latency is 3 ns without load and 7 ns with load, and the proposed method achieves a throughput of 18.57 Mbps. Area and power utilization are 69% (IO utilization) and 0.128 W, respectively. To validate the proposed method, its latency is compared with existing work: latency is reduced by 50% both with and without congestion load.
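The odd/even-column idea can be illustrated with a toy deterministic router on a 2D mesh. This variant (even columns resolve the X offset first, odd columns the Y offset first) is a hypothetical simplification, not the paper's exact turn-model rules, but it shows how column parity yields a single deterministic shortest path per source-destination pair:

```python
def route(src, dst):
    """Deterministic column-parity routing on a 2D mesh (illustrative variant).
    Returns the list of (x, y) hops from src to dst, inclusive."""
    x, y = src
    path = [(x, y)]
    while (x, y) != dst:
        step_x = (1 if dst[0] > x else -1) if x != dst[0] else 0
        step_y = (1 if dst[1] > y else -1) if y != dst[1] else 0
        if x % 2 == 0 and step_x:   # even column: resolve the X offset first
            x += step_x
        elif step_y:                # odd column (or X already resolved): move in Y
            y += step_y
        else:                       # Y resolved, X remains
            x += step_x
        path.append((x, y))
    return path
```

Every hop moves one step closer to the destination, so the path length always equals the Manhattan distance, i.e. the route is shortest-path by construction.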
Title: "Design and Performance Analysis of Low Latency Routing Algorithm based NoC for MPSoC" (Int. J. Commun. Networks Inf. Secur.)
Pub Date: 2023-01-08 | DOI: 10.17762/ijcnis.v14i1s.5588
A. Suganya., S. Aarthy
Neurodegenerative disorders present a challenge for accurate diagnosis and precise prognosis. Alzheimer's disease (AD) and Parkinson's disease (PD) may take several years to diagnose definitively. Because of aging populations in developed countries, neurodegenerative diseases such as AD and PD have become more prevalent, so new technologies and more accurate tests are needed to improve and accelerate diagnosis in the early stages of these diseases. With the widespread use of artificial intelligence in the medical domain, deep learning has shown significant promise in computer-assisted AD and PD diagnosis based on MRI. This article analyses and evaluates the effectiveness of existing deep learning (DL) approaches to identifying neurological illnesses from MRI data acquired with various modalities, including functional and structural MRI. Several open research issues are identified at the conclusion, along with potential directions for future study.
Title: "Alzheimer's And Parkinson's Disease Classification Using Deep Learning Based On MRI: A Review" (Int. J. Commun. Networks Inf. Secur.)
Pub Date: 2023-01-03 | DOI: 10.17762/ijcnis.v14i3.5607
A. Yadav, Bhanu Sharma, Akash Kumar Bhagat, Harshal Shah, C. Manjunath, Aishwarya Awasthi
With the advancement of the Internet of Things (IoT), the number of network devices is rising and the load on cloud data centres rises with it; certain delay-sensitive services are not served promptly, which reduces quality of service (QoS). Resource estimation can match users with appropriate resources by analysing the load on each resource, so predicting resource QoS is important for user satisfaction and task allotment in edge computing. This study develops a manta ray foraging optimization with backpropagation neural network (MRFO-BPNN) model for QoS-based resource estimation on edge computing platforms. The MRFO-BPNN model uses a BPNN to estimate resources in edge computing, and the BPNN's parameters are tuned effectively by the MRFO algorithm. An objective function is derived for the MRFO algorithm to track load-state changes and choose suitable resources. A wide-ranging experimental analysis demonstrates the enhanced performance of the MRFO-BPNN model, and a comprehensive comparison study highlights its superiority.
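A heavily simplified, illustrative version of manta-ray-style optimization (chain foraging plus a somersault step, here minimizing a test function) is sketched below. The paper's actual MRFO update rules and the coupling to BPNN weight tuning are not reproduced:

```python
import numpy as np

def mrfo_minimize(f, dim, n=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Simplified manta-ray-style minimizer: each agent chains behind the one
    ahead of it (the first follows the best-so-far), then somersaults around
    the best position. Returns the best point found and its objective value."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    best = min(X, key=f).copy()
    for _ in range(iters):
        for i in range(n):
            r = rng.random(dim)
            alpha = 2 * r * np.sqrt(np.abs(np.log(rng.random(dim))))
            leader = best if i == 0 else X[i - 1]      # chain foraging
            X[i] = X[i] + r * (leader - X[i]) + alpha * (best - X[i])
            # somersault around the best-so-far position, clipped to bounds
            X[i] = np.clip(X[i] + 2 * (rng.random(dim) * best
                                       - rng.random(dim) * X[i]), lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best, f(best)

sphere = lambda v: float(np.sum(v ** 2))
best, val = mrfo_minimize(sphere, dim=5)
```

In the paper's setting, `f` would instead score a BPNN parameter vector by its resource-estimation error.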
Title: "Edge Computing in Centralized Data Server Deployment for Network Qos and Latency Improvement for Virtualization Environment" (Int. J. Commun. Networks Inf. Secur.)
Pub Date: 2022-12-31 | DOI: 10.17762/ijcnis.v14i3.5604
S. Mubeen, Nandini Kulkarni, Manuel R. Tanpoco, R. D. Kumar, M. Naidu, T. Dhope
Emotion recognition is a crucial research area that can reveal numerous useful insights. Emotion can be conveyed through several visible channels, including speech, gestures, written material, and facial expressions. Natural language processing (NLP) and deep learning (DL) concepts are used in the content-based categorization problem at the core of emotion recognition in text documents. This research proposes a technique for linguistic emotion detection from social media using metaheuristic deep learning architectures. Live social media data is collected as input and processed for noise removal, smoothing, and dimensionality reduction. Features are then extracted from the processed data and classified using metaheuristic swarm regressive adversarial kernel component analysis. Experimental analysis is carried out in terms of precision, accuracy, recall, F1-score, RMSE, and MAP on various social media datasets.
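The noise-removal step for social-media text can be sketched as a small pipeline (URL, mention, and hashtag stripping, lowercasing, stopword removal). The abstract does not specify the paper's exact cleaning rules, so the regexes and stopword list below are assumptions:

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "and", "or", "to", "of"}

def clean_post(text):
    """Noise removal for one social-media post; returns cleaned tokens."""
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs
    text = re.sub(r"[@#]\w+", " ", text)        # strip mentions and hashtags
    text = re.sub(r"[^a-zA-Z\s]", " ", text)    # strip punctuation/digits/emoji
    return [t for t in text.lower().split() if t not in STOPWORDS]

tokens = clean_post("I am SO happy today!!! see https://ex.ample @friend #joy")
```

The cleaned token list is what the later feature-extraction and classification stages would consume.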
Title: "Linguistic Based Emotion Detection from Live Social Media Data Classification Using Metaheuristic Deep Learning Techniques" (Int. J. Commun. Networks Inf. Secur.)
Pub Date: 2022-12-31 | DOI: 10.17762/ijcnis.v14i3.5601
Rahul Bhatt, Rishi Shikka, R. ManjunathC., S. Sharma, Arvind Kumar Pandey, K. Bala
With the expansion of the Internet of Things (IoT) and of extensive wireless and 4G networks, demand is rising for the computing and data-communication capabilities of the emerging edge computing (EC) model. By moving functions and services from the cloud closer to the user, EC can offer robust transmission, networking, and storage capability. Resource scheduling in EC, which is crucial to the success of an EC system, has gained considerable attention. This manuscript introduces a lightning attachment algorithm based resource scheduling scheme with data integrity (LAARSS-DI) for the 4G IoT environment, designed to handle and allot resources proficiently. The LAARSS-DI technique relies on the standard lightning attachment algorithm (LAA), in which lightning is modelled as being triggered when the overall amount of charge stored in a cloud raises the electrical intensity. The LAARSS-DI technique then designs an objective function to reduce the cost of the scheduling process in the 4G IoT environment. A series of experiments is conducted and the outcomes are inspected from several aspects; the comparison study shows that the LAARSS-DI technique improves on existing approaches.
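As a generic illustration of cost-driven resource allotment (not the LAARSS-DI algorithm itself), the following greedy sketch assigns tasks to edge nodes by minimizing a load/capacity cost. The task and node names, demands, and capacities are hypothetical:

```python
def schedule(tasks, nodes):
    """Greedy cost-minimising allocation: place each task (largest demand
    first) on the node where the resulting cost = load/capacity is lowest."""
    load = {n: 0.0 for n in nodes}
    plan = {}
    for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        n = min(nodes, key=lambda n: (load[n] + demand) / nodes[n])
        plan[task] = n
        load[n] += demand
    return plan, load

tasks = {"t1": 4.0, "t2": 2.0, "t3": 2.0, "t4": 1.0}   # CPU demands
nodes = {"edge1": 8.0, "edge2": 4.0}                   # node capacities
plan, load = schedule(tasks, nodes)
```

A metaheuristic such as LAA would explore many such assignments and keep the one minimizing the global cost objective, rather than committing greedily.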
Title: "Centralized Cloud Service Providers in Improving Resource Allocation and Data Integrity by 4G IoT Paradigm" (Int. J. Commun. Networks Inf. Secur.)
Pub Date: 2022-12-31 | DOI: 10.17762/ijcnis.v14i1s.5639
S. Waris, S. Koteeswaran
Data analysis is important for managing the large volume of knowledge in the healthcare industry. Earlier medical studies favored prediction over processing and assimilating the massive volume of hospital data. With the tremendous expansion of knowledge in the biological and healthcare fields, precise analysis of health data benefits early disease identification and patient treatment; however, accuracy suffers when there are gaps in the medical data. The K-means algorithm is simple and efficient and is appropriate for processing vast quantities of continuous, high-dimensional numerical data. However, the number of clusters must be predetermined, choosing the right K is frequently challenging, and the cluster centers chosen in the first phase also affect the clustering results. To overcome these drawbacks, the initialization and centroid steps of K-means are modified, and a classification technique combining a convolutional neural network (CNN) and an extreme learning machine (ELM) is used. Building on this, disease risk prediction using a repository dataset is proposed, and different machine learning algorithms are applied to predict disease from structured data. The prediction accuracy of the proposed hybrid model is 99.8%, which is higher than SVM (support vector machine), KNN (k-nearest neighbors), AB (AdaBoost), and CKN-CNN (consensus K-nearest neighbor with convolutional neural network).
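The modified-initialization idea can be illustrated with k-means++-style seeding, one standard way to reduce the sensitivity to initial centers noted above (the paper's specific modification is not detailed in the abstract, so this is a representative sketch):

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """k-means++-style seeding: pick each new centroid with probability
    proportional to its squared distance from the nearest existing one."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = kmeans_pp_init(X, k, rng)
    for _ in range(iters):
        # assign each point to its nearest centroid, then recompute centroids
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return labels, C

rng = np.random.default_rng(1)
# two well-separated synthetic patient-feature clusters
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels, C = kmeans(X, 2)
```

In the proposed pipeline, the cluster assignments would then feed the combined CNN and ELM classification stage.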
Title: "An Investigation on Disease Diagnosis and Prediction by Using Modified K-Mean clustering and Combined CNN and ELM Classification Techniques" (Int. J. Commun. Networks Inf. Secur.)