The liver is the second largest organ in the human body after the skin. Liver disease impairs the liver's ability to properly separate nutrients and waste in the digestive system and, over time, causes scarring (cirrhosis). This scarring progressively damages healthy liver tissue and compromises liver function; left untreated for a prolonged period, it can lead to severe complications such as liver failure or liver cancer. Patients can be spared these complications if the disease is detected at an early stage, and existing research on liver disease prediction has largely relied on intelligent machine learning-based techniques. However, these techniques suffer from several shortcomings, including low accuracy, overfitting, long training times, and poor feature extraction capability. To overcome these problems, we present a modified long short-term memory (MLSTM) architecture for chronic liver disease prediction. The proposed methodology has three stages: information enhancement, feature extraction, and classification. A modified generative adversarial network uses an autoencoder for sample augmentation, which enriches the diversity of both the normal and abnormal classes. Outlier information is eliminated via the criminal search algorithm, which captures the differences and correlations among multiple samples. The fast independent component analysis algorithm and an enhanced whale optimization algorithm are used for feature extraction. This step identifies the features crucial for liver disease prediction and discards irrelevant and duplicate features, thereby improving convergence, computational time, and prediction accuracy. The MLSTM architecture then classifies the samples in the liver disease datasets into normal and abnormal (liver disease) classes.
The proposed methodology offers improved performance in terms of accuracy, recall, mean square error, and F-measure. The results show that the proposed methodology can help doctors diagnose liver disease at an earlier stage.
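The abstract does not give the MLSTM equations, so as a point of reference, a minimal sketch of the standard LSTM gating that such a classifier builds on is shown below, with scalar inputs and states and hypothetical weights; the final hidden state is thresholded into normal/abnormal:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step for scalar input/state; w maps gate name -> (wx, wh, b)."""
    gates = {}
    for g in ("i", "f", "o", "g"):  # input, forget, output gates and candidate
        wx, wh, b = w[g]
        a = wx * x + wh * h_prev + b
        gates[g] = math.tanh(a) if g == "g" else sigmoid(a)
    c = gates["f"] * c_prev + gates["i"] * gates["g"]  # cell-state update
    h = gates["o"] * math.tanh(c)                      # hidden output
    return h, c

def classify(sequence, w, threshold=0.0):
    """Run the sequence through the cell; 1 = abnormal (liver disease), 0 = normal."""
    h, c = 0.0, 0.0
    for x in sequence:
        h, c = lstm_step(x, h, c, w)
    return 1 if h > threshold else 0
```

In a real model the weights are learned matrices and the threshold is replaced by a trained output layer; this sketch only illustrates the gating mechanism the MLSTM modifies.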
{"title":"A novel modified long short term memory architecture for automatic liver disease prediction from patient records","authors":"V. A. A. Daniel, Ravi Ramaraj","doi":"10.1002/cpe.7372","DOIUrl":"https://doi.org/10.1002/cpe.7372","url":null,"abstract":"","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"80 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77368248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. J. Jagannath, Raveena Judie Dolly, G. S. Let, James Dinesh Peter
Smart healthcare systems already exist in a variety of architectures, yet the search for better ones continues. The cutting-edge field of the Internet of Things (IoT) and related technological developments offer better solutions for smart healthcare systems using sensor body area networks (BANs). A patient's sensor data can thus be collected, stored, and analyzed, and suitable treatments can be offered over the network, anytime, anywhere. The most complex part of such systems is the physician's analysis of the huge volume of patient data required to prepare a suitable diagnosis and treatment. This article presents a deep reinforcement learning methodology for smart healthcare decisions in an IoT-interfaced intelligent monitoring system. The system comprises four layers: patient data collection, edge computing, patient data transmission, and cloud computing. IoT is employed for the automatic collection of patient data and its transmission to data centers. Artificial intelligence techniques analyze these data to provide suitable decisions, diagnoses, and treatments, with deep reinforcement learning providing the platform for those smart decisions. The investigation was carried out with synthetic simulated data from various BAN sensors. We developed a dataset of 286 records containing 21 different health parameters. After preprocessing, these data were stored on an Amazon Web Services (AWS) cloud server using the Message Queuing Telemetry Transport (MQTT) IoT protocol. A Deep Q-Network (DQN) was used as the training algorithm. The methodology was evaluated in PyTorch on a single GTX 1080 Ti GPU with training data sizes from 27 to 1536; training for 500 epochs took roughly 10,000 to 90,000 s.
In the high-dimensional action-space environment, the algorithm was slow to analyze, explore, and determine effective healthcare strategies. The system's convergence between the estimated hidden health state (g') and the actual health state (g) was evaluated for the 21 health parameters, whose values range from 0 to 1. The system responded with smart, decisive interventions that were close to a physician's decisions. The proposed methodology is a promising solution for smart and economical telemedicine.
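The abstract's DQN operates on neural value estimates; the underlying Bellman update it approximates can be illustrated with a tabular Q-learning sketch on a toy, entirely hypothetical two-state health environment (the states, actions, and reward model below are illustrative, not from the paper):

```python
import random

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Tabular Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

def train(episodes=500):
    states, actions = ("healthy", "ill"), ("monitor", "treat")
    Q = {s: {a: 0.0 for a in actions} for s in states}
    rng = random.Random(0)
    for _ in range(episodes):
        s, a = rng.choice(states), rng.choice(actions)
        # hypothetical reward model: treat the ill, just monitor the healthy
        r = 1.0 if (s, a) in (("ill", "treat"), ("healthy", "monitor")) else -1.0
        q_update(Q, s, a, r, "healthy")  # assume the patient recovers next
    return Q
```

A DQN replaces the table `Q` with a neural network trained on replayed transitions, which is what makes the high-dimensional action spaces mentioned above tractable, if slow.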
{"title":"An IoT enabled smart healthcare system using deep reinforcement learning","authors":"D. J. Jagannath, Raveena Judie Dolly, G. S. Let, James Dinesh Peter","doi":"10.1002/cpe.7403","DOIUrl":"https://doi.org/10.1002/cpe.7403","url":null,"abstract":"","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86211062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, many healthcare organizations have adopted decision-making systems based on electronic health record (EHR) data to guarantee patient safety and improve the quality of healthcare. The evolution of Internet of Things (IoT) technologies has greatly helped in implementing integrated, interoperable decision-making systems based on EHRs and medical devices (MDs). These IoT-based systems allow clinicians to collect real-time health data and provide accurate patient monitoring. Nevertheless, several studies have shown that it is hard to improve the quality of healthcare with current EHR IoT-based systems because they do not let clinicians easily express their needs. Interactive visualization tools have been proposed to improve the efficacy and utility of these EHR-based systems. However, no framework provides clinicians with a visual summary of patient data for planning specific clinical tasks, subsequently evaluating clinician responses, visually exploring EHR and MD data, gaining insights, supporting dynamic care-coordination processes, and forming and validating hypotheses and risks. This article addresses this problem and introduces SIMCard, an aggregation-based connected EHR visualization framework for patient monitoring, interpretation, and prediction with MDs. The proposed framework synthesizes a patient's clinical data into a single aggregating model for both EHRs and MDs, conforming to health standards and terminologies. It also links the aggregating model to relevant medical knowledge in order to provide a connected, dynamic care and prevention plan. Last but not least, it provides an aggregated visualization model capable of graphically displaying a patient's personal data from databases, healthcare devices, and sensors, reducing the cognitive barriers posed by the complexity of medical information and the interpretation of health data.
To demonstrate the refinement and design of our system and to observe users' actual practice of visualizing and analyzing real-world datasets, we evaluated our system and compared it to existing ones.
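The core idea of an aggregating model, folding static EHR fields and the latest reading from each medical device into one patient view, can be sketched as follows; the field names and reading format are illustrative assumptions, not SIMCard's actual schema:

```python
def aggregate_patient_view(ehr_record, device_readings):
    """Merge static EHR fields with the latest reading from each device/sensor.

    device_readings: iterable of (device_name, timestamp, value) tuples.
    """
    view = dict(ehr_record)  # demographics, history, etc.
    latest = {}
    for device, ts, value in device_readings:
        # keep only the most recent reading per device
        if device not in latest or ts > latest[device][0]:
            latest[device] = (ts, value)
    view["devices"] = {d: v for d, (ts, v) in latest.items()}
    return view
```

A production framework would additionally map field names to standard terminologies (e.g., SNOMED CT or LOINC codes) before aggregation, which is the "conforming to health standards" step described above.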
{"title":"SIMCard: Toward better connected electronic health record visualization","authors":"S. Sassi, R. Chbeir","doi":"10.1002/cpe.7399","DOIUrl":"https://doi.org/10.1002/cpe.7399","url":null,"abstract":"","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90306480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, wireless sensor networks (WSNs) and the Internet of Things (IoT) have emerged as indispensable assets that play a critical role in revolutionizing data communication. Owing to the evolution of communication standards, research in IoT-based wireless sensor networks has been rapidly progressing toward effective data routing with a prolonged network lifetime and minimized energy consumption. In this article, an optimized ticket-manager-based energy-aware multipath routing protocol (TMERP) is proposed. The protocol design comprises three functional entities: the ticket manager (TM), the routing planner (RP), and the backup node (BN). The TM controls and monitors all networking-related constraints. The RP minimizes the overall complexity of optimal resource allocation by avoiding end-to-end delay. Finally, the BN facilitates efficient data routing through the optimal selection of routing paths, using node trust evaluation and a backup process to minimize data loss. The proposed multipath routing system thus has a distinct advantage in extending network lifetime with minimal energy consumption, owing to the collective performance of its functional entities. Simulation results show that the proposed protocol improves network energy, throughput, and network operational lifetime by 39.3%, 47.9%, and 10.5%, respectively, compared with similar existing protocols.
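Trust- and energy-aware selection of a primary route plus a backup route, as the BN entity performs, can be sketched as below; the bottleneck (minimum) scoring and the weights are assumptions for illustration, not TMERP's published formulas:

```python
def select_route(paths, trust, energy, w_trust=0.6, w_energy=0.4):
    """Rank candidate paths by node trust and residual energy.

    The best-scoring path is the primary route; the runner-up is kept
    as the backup route to minimize data loss on failure.
    """
    def score(path):
        t = min(trust[n] for n in path)   # a path is as trustworthy as its weakest node
        e = min(energy[n] for n in path)  # and as durable as its lowest-energy node
        return w_trust * t + w_energy * e

    ranked = sorted(paths, key=score, reverse=True)
    primary = ranked[0]
    backup = ranked[1] if len(ranked) > 1 else None
    return primary, backup
```

Weighting trust above energy (0.6 vs. 0.4 here) is one plausible policy; the actual protocol would tune this trade-off against the lifetime and throughput targets reported above.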
{"title":"An optimized ticket manager based energy‐aware multipath routing protocol design for IoT based wireless sensor networks","authors":"M. Roberts, Jayapratha Thangavel","doi":"10.1002/cpe.7398","DOIUrl":"https://doi.org/10.1002/cpe.7398","url":null,"abstract":"","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84137124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the advent of breakthrough sensing technology, capturing and analyzing data for knowledge engineering has become far more opportune. Efficiently analyzing sensor-based data for effective decision making remains a significant challenge. Conventional prediction and recommender systems lack comprehensive analysis of all parameters and aspects, compromising prediction results. At the decision-making level, traditional knowledge-driven prediction systems deploy classical ontologies for knowledge representation and analysis. However, classical ontologies are not powerful enough for real-world applications because they cannot handle vagueness in data. Fuzzy ontology, by contrast, handles hazy and uncertain data and yields promising analysis results. This work presents an interval type-2 fuzzy ontological knowledge model that predicts the water quality of sensor-based water samples and provides solutions appropriate to the predicted quality state. The proposed knowledge model consists of two newly developed ontologies: a water sensor observations ontology (a crisp ontology modeling sensor observational data) and a water quality ontology (an interval type-2 fuzzy ontology modeling the water quality prediction process). The inference mechanism is based on interval type-2 fuzzy partitioning and computation. Besides predicting water quality and providing solutions, the proposed model addresses interoperability and the exchange of consensual knowledge among multiple disciplines. The model is validated with real-life parameterized water sensor data captured from geographically dispersed monitoring stations, with approximately 50,000 samples at each station.
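The distinguishing feature of interval type-2 fuzzy sets is that each input maps to a membership *interval* rather than a single degree, bounded by a lower and an upper membership function. A minimal sketch with triangular membership functions (the parameter values are illustrative, not the paper's calibrated water-quality partitions):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, lower_mf, upper_mf):
    """Interval type-2 membership: return the [lower, upper] degree interval.

    lower_mf / upper_mf are (a, b, c) triangle parameters; the upper
    triangle is wider, so it dominates the lower one everywhere.
    """
    lo, hi = tri(x, *lower_mf), tri(x, *upper_mf)
    return min(lo, hi), max(lo, hi)
```

The width of the interval (the footprint of uncertainty) is what lets the model express how hazy a sensor reading's classification is, which crisp and type-1 fuzzy ontologies cannot.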
{"title":"An interval type‐2 fuzzy ontological model: Predicting water quality from sensory data","authors":"Diksha Hooda, Rinkle Rani","doi":"10.1002/cpe.7377","DOIUrl":"https://doi.org/10.1002/cpe.7377","url":null,"abstract":"","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85355655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breast cancer is one of the primary causes of death in females worldwide, so recognizing and categorizing it at an initial stage is necessary to help patients take suitable action. In this research, a novel spider monkey-based convolution model (SMCM) is developed for detecting breast cancer cells at an early stage. Breast magnetic resonance imaging (MRI) is used as the dataset on which the system is trained. The developed SMCM is applied to the breast MRI dataset to detect and segment the affected part of the breast, and the segmented images are then used for tracking within the dataset to identify the possibility of breast cancer. The approach is implemented in Python, and its performance is evaluated against prevailing works. The outcomes show that the proposed model improves accuracy by 1.5% compared to existing models.
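The abstract does not detail the SMCM segmentation step; the simplest baseline it improves upon, intensity thresholding of an image into a binary lesion mask, can be sketched as follows (images as plain lists of lists; the threshold is a hypothetical parameter):

```python
def segment(image, threshold):
    """Binary mask of pixels whose intensity exceeds the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def lesion_fraction(mask):
    """Fraction of pixels flagged as belonging to the candidate region."""
    total = sum(len(row) for row in mask)
    return sum(map(sum, mask)) / total
```

A convolutional model like the SMCM learns the decision boundary per pixel neighborhood instead of using a single global threshold, which is where its reported accuracy gain over such baselines comes from.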
{"title":"Enhanced deep learning frame model for an accurate segmentation of cancer affected part in breast","authors":"Kranti Kumar Dewangan, S. Sahu, R. Janghel","doi":"10.1002/cpe.7379","DOIUrl":"https://doi.org/10.1002/cpe.7379","url":null,"abstract":"","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73284608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet of Things (IoT) concept increases the spectrum demands of mobile users in wireless communications because of the intensive and heterogeneous structure of IoT. New devices join IoT networks every day, and spectrum scarcity may become a crucial issue for IoT environments in the near future. Cognitive radio (CR) is capable of sensing and detecting spectrum holes; with the aid of CR, more powerful IoT devices can be built for such crowded wireless environments. Moreover, dynamic ad hoc CR networks have no fixed base station, so CR-capable IoT (CR-based IoT) devices with routing capabilities will be a solution for future IoT environments. In this study, a spectrum-aware ad hoc on-demand distance vector (AODV) routing protocol is proposed for CR-based IoT devices in IoT environments. For performance analysis of the proposed method, various network scenarios with different channel idle probabilities were simulated, and throughput and delay were analyzed for different offered loads.
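One simple way to make a route metric spectrum-aware, and a plausible reading of why idle probability matters to route choice here (the cost model below is an illustrative assumption, not the paper's metric), is to weight each hop by the expected number of attempts needed to find the licensed channel idle:

```python
def route_metric(path, idle_prob):
    """Expected transmission attempts along a path.

    Assumes each hop needs on average 1/p tries when its channel is
    idle with probability p (geometric-retry model).
    """
    return sum(1.0 / idle_prob[hop] for hop in zip(path, path[1:]))

def best_route(paths, idle_prob):
    """Prefer the path with the lowest expected number of attempts."""
    return min(paths, key=lambda p: route_metric(p, idle_prob))
```

Under such a metric a longer path over mostly idle channels can beat a one-hop path over a busy channel, which is exactly the trade-off a plain hop-count AODV metric misses.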
{"title":"An effective routing algorithm for spectrum allocations in cognitive radio based internet of things","authors":"Murtaza Cicioğlu, A. Çalhan, Md. Sipon Miah","doi":"10.1002/cpe.7368","DOIUrl":"https://doi.org/10.1002/cpe.7368","url":null,"abstract":"","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80569615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital image sharing and use are currently increasing at a rapid pace, so copyright/ownership protection for digital images has become an essential requirement in the modern world of digital advancement. This work offers an image watermarking scheme that provides effective ownership/copyright verification without compromising imperceptibility. The image is first partitioned into small blocks. During embedding, a multilevel transform-domain framework embeds the watermark information into the blocks. The block selection process is randomized (key-based) to offer strong security against illegal manipulation and access. Before embedding, the watermark is encrypted using an Arnold transform-based approach for additional security. The scheme is blind, highly imperceptible, and robust enough to endure a wide variety of processing attacks. Experimental results on different images illustrate that the proposed watermarking approach offers high imperceptibility, high robustness, decent embedding capacity, and significant security features. A comparison with existing robust watermarking schemes of the same payload shows the superiority of the proposed work over methods presented in the recent past.
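The Arnold transform used for watermark encryption is the standard Arnold cat map, a bijective pixel permutation of an N x N image that is undone by the inverse map (or by iterating through the map's period). A minimal sketch, with images as N x N lists of lists (the exact iteration count a real scheme uses is key material and is not specified in the abstract):

```python
def arnold_scramble(image, iterations=1):
    """Arnold cat map: (x, y) -> ((x + y) mod N, (x + 2y) mod N), repeated."""
    n = len(image)
    for _ in range(iterations):
        out = [[None] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                out[(x + 2 * y) % n][(x + y) % n] = image[y][x]
        image = out
    return image

def arnold_unscramble(image, iterations=1):
    """Inverse map: (x, y) -> ((2x - y) mod N, (y - x) mod N), repeated."""
    n = len(image)
    for _ in range(iterations):
        out = [[None] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                out[y][x] = image[(x + 2 * y) % n][(x + y) % n]
        image = out
    return image
```

Because the map is a permutation, scrambling preserves every pixel value while destroying spatial structure; only a party who knows the iteration count can restore the watermark, which is what adds the extra layer of security before embedding.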
{"title":"A multiple transform based approach for robust and blind image copyright protection","authors":"Rishi Sinhal, I. Ansari","doi":"10.1002/cpe.7362","DOIUrl":"https://doi.org/10.1002/cpe.7362","url":null,"abstract":"Digital image sharing and utilization have increasing in a speedy manner at present time. Therefore, copyright/ownership protection for digital images has been an essential requirement in the modern world of digital advancements. This work offers an image watermarking scheme to provide ownership/copyright verification in an effective manner with no conciliation with imperceptibility. The image is first partitioned into small size blocks. During embedding, a multilevel transform domain‐based framework is employed to embed the watermark information into blocks. Additionally, the block selection process is made randomize (key‐based) to offer high security against illegal manipulations/access. Before embedding, the watermark is encrypted using the Arnold transform‐based approach for additional security. The scheme has blind nature, high imperceptibility and it is robust enough to endure a different variety of processing attacks. Experimental results on different images illustrate that the proposed watermarking approach has high imperceptibility, high robustness, decent embedding capacity, and significant security features. 
The relative comparison with the existing robust watermarking schemes (having the same payload) shows the superiority of the proposed work over existing methods presented in the recent past.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86463076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model selection via conditional conceptual predictive statistic for mixed and stochastic restricted ridge estimators in linear mixed models","authors":"M. Özkale, Özge Kuran","doi":"10.1002/cpe.7366","DOIUrl":"https://doi.org/10.1002/cpe.7366","url":null,"abstract":"In this article, we characterize the mixed Cp$$ {C}_p $$ ( CMCp$$ {mathrm{CMC}}_p $$ ) and conditional stochastic restricted ridge Cp$$ {C}_p $$ ( CSRRCp$$ {mathrm{CSRRC}}_p $$ ) statistics that depend on the expected conditional Gauss discrepancy for the purpose of selecting the most appropriate model when stochastic restrictions are appeared in linear mixed models. Under the known and unknown variance components assumptions, we define two shapes of CMCp$$ {mathrm{CMC}}_p $$ and CSRRCp$$ {mathrm{CSRRC}}_p $$ statistics. Then, the article is concluded with both a Monte Carlo simulation study and a real data analysis, supporting the findings of the theoretical results on the CMCp$$ {mathrm{CMC}}_p $$ and CSRRCp$$ {mathrm{CSRRC}}_p $$ statistics.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77707054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient early detection of diabetic retinopathy using dwarf mongoose optimization based deep belief network","authors":"A. Abirami, R. Kavitha","doi":"10.1002/cpe.7364","DOIUrl":"https://doi.org/10.1002/cpe.7364","url":null,"abstract":"In general, diabetic retinopathy (DR) is a common ocular disease that causes damage to the retina due to blood leakage from the vessels. Earlier detection of DR becomes a complicated task and it is necessary to prevent complete blindness. Various physical examinations are employed in DR detection but manual diagnosis results in misclassification results. Therefore, this article proposes a novel technique to predict and classify the DR disease effectively. The significant objective of the proposed approach involves the effective classification of fundus retinal images into two namely, normal (absence of DR) and abnormal (presence of DR). The proposed DR detection utilizes three vital phases namely, the data preprocessing, image augmentation, feature extraction, and classification. Initially, the image preprocessing is done to remove unwanted noises and to enhance images. Then, the preprocessed image is augmented to enhance the size and quality of the training images. This article proposes a novel modified Gaussian convolutional deep belief network based dwarf mongoose optimization algorithm for effective extraction and classification of retinal images. In this article, an ODIR‐2019 dataset is employed in detecting and classifying DR disease. Finally, the experimentation is carried out and the proposed approach achieved 97% of accuracy. 
This implies that our proposed approach effectively classifies the fundus retinal images.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82443081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
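The preprocessing and augmentation phases described above can be sketched generically as follows. The mean-filter denoising and flip-based augmentation here are illustrative stand-ins only; the paper's actual pipeline uses its modified Gaussian convolutional deep belief network tuned by dwarf mongoose optimization, which is not reproduced here.

```python
import numpy as np

def preprocess(img):
    # Normalize intensities to [0, 1], then apply a 3x3 mean filter
    # as a simple denoising step.
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

def augment(img):
    # Horizontal and vertical flips triple the training set per image.
    return [img, np.fliplr(img), np.flipud(img)]

img = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float)
batch = augment(preprocess(img))
print(len(batch), batch[0].shape)  # 3 (8, 8)
```

Each augmented, normalized image would then feed the feature-extraction and classification stages of the network.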