The Internet of Medical Things (IoMT) is an extension of the Internet of Things (IoT) in which connected devices collaborate to provide remote patient health monitoring, also known as the Internet of Health (IoH). Smartphones and IoMT devices are expected to maintain a secure and trusted exchange of confidential patient records while managing patients remotely. Healthcare organizations deploy Healthcare Smartphone Networks (HSNs) to collect personal patient data and share it among smartphone users and IoMT nodes. However, attackers can gain access to confidential patient data via infected IoMT nodes on the HSN and can compromise the entire network through malicious nodes. This article proposes a Hyperledger blockchain-based technique to identify compromised IoMT nodes and safeguard sensitive patient records. Furthermore, the paper presents a Clustered Hierarchical Trust Management System (CHTMS) to block malicious nodes. In addition, the proposal employs Elliptic Curve Cryptography (ECC) to protect sensitive health records and is resilient against denial-of-service (DoS) attacks. The evaluation results show that integrating blockchains into the HSN system improves detection performance compared to the existing state of the art, and the simulation results indicate better security and reliability than conventional databases.
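The abstract does not detail how ECC is applied to the records. As a minimal ECIES-style sketch only (the curve P-256, HKDF-SHA256, AES-GCM, and the Python `cryptography` package are our assumptions, not the paper's), an IoMT node could encrypt a record for a hospital server like this:

```python
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver (e.g., hospital server): long-term EC key pair on curve P-256.
receiver_priv = ec.generate_private_key(ec.SECP256R1())
receiver_pub = receiver_priv.public_key()

# Sender (IoMT node): ephemeral key + ECDH -> symmetric key (ECIES-style).
eph_priv = ec.generate_private_key(ec.SECP256R1())
shared = eph_priv.exchange(ec.ECDH(), receiver_pub)
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"hsn-record").derive(shared)

# Encrypt a patient record with AES-256-GCM under the derived key.
nonce = os.urandom(12)
record = b'{"patient_id": "anon-42", "spo2": 97}'
ciphertext = AESGCM(key).encrypt(nonce, record, None)

# Receiver re-derives the key from the sender's ephemeral public key.
shared_r = receiver_priv.exchange(ec.ECDH(), eph_priv.public_key())
key_r = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
             info=b"hsn-record").derive(shared_r)
assert AESGCM(key_r).decrypt(nonce, ciphertext, None) == record
```

Using a fresh ephemeral key per record gives forward secrecy at the message level; the paper's actual key-management design may differ.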
The number of research articles published on COVID-19 has increased dramatically since the outbreak of the pandemic in November 2019. This surge in publications leads to information overload, and it has become increasingly urgent for researchers and medical associations to stay up to date on the latest COVID-19 studies. To address information overload in the COVID-19 scientific literature, this study presents CovSumm, a novel unsupervised graph-based hybrid model for single-document summarization, evaluated on the CORD-19 dataset. We tested the proposed methodology on the scientific papers in the database dated from January 1, 2021 to December 31, 2021, consisting of 840 documents in total. The proposed summarizer is a hybrid of two distinct extractive approaches: (1) GenCompareSum (transformer-based) and (2) TextRank (graph-based). The sum of the scores generated by both methods is used to rank the sentences for the summary. On CORD-19, the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metric is used to compare the performance of the CovSumm model with various state-of-the-art techniques. The proposed method achieved the highest scores of ROUGE-1: 40.14%, ROUGE-2: 13.25%, and ROUGE-L: 36.32%, showing improved performance on CORD-19 compared to existing unsupervised text summarization methods.
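A minimal sketch of the score-fusion idea in Python (the TF-IDF similarity graph and the `transformer_scores` stub are our simplifications; CovSumm uses GenCompareSum for that second component):

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_scores(sentences):
    # TextRank-style centrality over a weighted sentence-similarity graph.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    graph = nx.from_numpy_array(cosine_similarity(tfidf))
    return nx.pagerank(graph)                  # dict: sentence index -> score

def transformer_scores(sentences):
    # Placeholder: in CovSumm this would be GenCompareSum's salience score.
    return {i: 0.0 for i in range(len(sentences))}

def covsumm_summary(sentences, k=3):
    tr, gc = textrank_scores(sentences), transformer_scores(sentences)
    fused = {i: tr[i] + gc[i] for i in range(len(sentences))}  # sum of scores
    top = sorted(fused, key=fused.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # keep document order
```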
Remarkable advances in machine learning and computer vision have been achieved through deep neural networks, among which the convolutional neural network (CNN) is one of the most widely used; it has been applied to pattern recognition, medical diagnosis, and signal processing, among other domains. For these networks, choosing hyperparameters is a challenge of utmost importance, because the search space grows exponentially as the number of layers increases. Moreover, all known classical and evolutionary pruning algorithms require a trained or fully built architecture as input; none of them considers pruning during the design phase. Yet to assess the effectiveness and efficiency of any generated architecture, channel pruning must be carried out before the dataset is passed through the network and classification error is computed. For instance, after pruning, an architecture of medium classification quality may turn into one that is both very lightweight and accurate, and vice versa. Countless such scenarios can occur, which prompted us to cast the entire process as a bi-level optimization problem: the upper level generates the architecture, while the lower level optimizes channel pruning. Evolutionary algorithms (EAs) have proven effective in bi-level optimization, leading us to adopt a co-evolutionary migration-based algorithm as the search engine for our bi-level architectural optimization problem. Our proposed method, CNN-D-P (bi-level CNN design and pruning), was tested on the widely used image classification benchmark datasets CIFAR-10, CIFAR-100, and ImageNet, and is validated through a set of comparative experiments against relevant state-of-the-art architectures.
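A rough schematic of the bi-level loop follows; the toy stubs (`random_architecture`, `mutate`, `pruned_error`) stand in for real CNN training on CIFAR/ImageNet, and random sampling stands in for the co-evolutionary migration-based search, so nothing here reproduces the paper's actual algorithm:

```python
import random

# Hypothetical stand-ins: a real system would build, prune, train, and
# evaluate a CNN; a cheap proxy keeps the sketch self-contained.
def random_architecture():
    return {"layers": [random.choice([32, 64, 128]) for _ in range(4)]}

def mutate(arch):
    return {"layers": [max(16, c + random.choice([-16, 0, 16]))
                       for c in arch["layers"]]}

def pruned_error(arch, mask):
    # Proxy objective: reward pruned channel counts near a target budget.
    kept = sum(c * (1 - m) for c, m in zip(arch["layers"], mask))
    return abs(kept - 150) / 150 + 0.05 * random.random()

def lower_level(arch, budget=20):
    # Lower level: optimize the per-layer pruning mask for a FIXED architecture.
    best_mask, best_err = None, float("inf")
    for _ in range(budget):
        mask = [random.uniform(0.0, 0.7) for _ in arch["layers"]]
        err = pruned_error(arch, mask)
        if err < best_err:
            best_mask, best_err = mask, err
    return best_mask, best_err

def upper_level(generations=10, pop_size=8):
    # Upper level: evolve architectures; an architecture's fitness is the
    # error of its best PRUNED variant, not of the raw architecture.
    population = [random_architecture() for _ in range(pop_size)]
    best = None
    for _ in range(generations):
        scored = sorted(((a, lower_level(a)[1]) for a in population),
                        key=lambda t: t[1])
        best = scored[0]
        parents = [a for a, _ in scored[: pop_size // 2]]
        population = parents + [mutate(a) for a in parents]
    return best

print(upper_level())
```

The key design point the sketch preserves is evaluation order: pruning happens inside the fitness function, so the upper level never sees an unpruned error.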
The healthcare industry is rapidly automating, in large part because of the Internet of Things (IoT). The sector of the IoT devoted to medical care and research is often called the Internet of Medical Things (IoMT). Data collection and processing are the fundamental components of all IoMT applications, and given the vast quantity of data involved in healthcare and the value of precise forecasts, machine learning (ML) algorithms must be integrated into the IoMT without delay. Together, IoMT, cloud services, and ML techniques have become effective tools for solving many problems in the healthcare sector, such as epileptic seizure monitoring and detection. Epilepsy, a potentially lethal neurological condition, has become a global issue and one of the greatest hazards to human life. To prevent the deaths of thousands of epileptic patients each year, an effective method for detecting epileptic seizures at their earliest stage is critically needed. Numerous medical tasks, including epilepsy monitoring and diagnosis, can be carried out remotely with the IoMT, reducing healthcare expenses and improving services. This article serves as both a collection and a review of the cutting-edge ML applications for epilepsy detection that are presently being combined with the IoMT.
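As a toy illustration of the kind of detector such surveys cover, a windowed-EEG seizure classifier might look like the following scikit-learn sketch; the features and labels here are synthetic placeholders, not a real EEG pipeline or any specific method from the review:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for per-window EEG features (e.g., band powers).
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)   # toy "seizure" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```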
Due to its flexibility, cost-effectiveness, and rapid deployment, the unmanned aerial vehicle-mounted base station (UmBS) is a promising approach for restoring wireless services in areas devastated by natural disasters such as floods, thunderstorms, and tsunamis. However, the biggest challenges in deploying UmBSs are acquiring the position information of ground user equipment (UE), optimizing UmBS transmit power, and UE-UmBS association. In this article, we propose Localization of ground UEs and their Association with the UmBS (LUAU), an approach that ensures both localization of ground UEs and energy-efficient UmBS deployment. Unlike existing studies that assume known UE positions, we first propose a three-dimensional range-based localization approach (3D-RBL) to estimate the position information of ground UEs. Subsequently, an optimization problem is formulated to maximize the UEs' mean data rate by optimizing the UmBS transmit power and deployment locations, taking interference from surrounding UmBSs into account. To solve the optimization problem, we utilize the exploration and exploitation abilities of the Q-learning framework. Simulation results demonstrate that the proposed approach outperforms two benchmark schemes in terms of the UEs' mean data rate and outage percentage.
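A minimal tabular Q-learning sketch of the placement idea (a single UmBS on a toy grid, with a distance-decay proxy for mean data rate; the paper's actual state space, reward, transmit-power variable, and interference model are richer than this):

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # UmBS moves on a 5x5 grid
ues = rng.integers(0, GRID, size=(6, 2))       # toy ground-UE positions

def reward(pos):
    # Proxy for mean data rate: decays with UmBS-UE distance.
    d = np.linalg.norm(ues - pos, axis=1)
    return float(np.mean(1.0 / (1.0 + d ** 2)))

Q = np.zeros((GRID, GRID, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.2              # learning rate, discount, epsilon
pos = np.array([0, 0])
for _ in range(5000):
    s = tuple(pos)
    # Epsilon-greedy balances exploration and exploitation.
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
    nxt = np.clip(pos + ACTIONS[a], 0, GRID - 1)
    r = reward(nxt)
    Q[s][a] += alpha * (r + gamma * np.max(Q[tuple(nxt)]) - Q[s][a])
    pos = nxt

print("learned UmBS cell:", np.unravel_index(np.argmax(Q.max(axis=2)), (GRID, GRID)))
```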
The recent emergence of monkeypox poses a life-threatening challenge to humans and has become one of the foremost global health concerns after COVID-19. Machine learning-based smart healthcare monitoring systems have already demonstrated significant potential in image-based diagnosis, including brain tumor identification and lung cancer diagnosis, and machine learning can likewise be utilized for the early identification of monkeypox cases. However, sharing critical health information among actors such as patients, doctors, and other healthcare professionals in a secure manner remains a research challenge. Motivated by this, our paper presents a blockchain-enabled conceptual framework for the early detection and classification of monkeypox using transfer learning. The proposed framework is experimentally demonstrated in Python 3.9 using a monkeypox dataset of 1905 images obtained from a GitHub repository. To validate the effectiveness of the proposed model, various performance metrics, namely accuracy, recall, precision, and F1-score, are employed, and the performance of different transfer learning models, namely Xception, VGG19, and VGG16, is compared against the presented methodology. Based on this comparison, the proposed methodology effectively detects and classifies monkeypox with a classification accuracy of 98.80%. In the future, multiple skin diseases such as measles and chickenpox could be diagnosed by applying the proposed model to skin lesion datasets.
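A minimal transfer-learning sketch in Keras using VGG16, one of the baselines the paper compares against (the dataset path, input size, binary head, and training schedule are illustrative assumptions, not the paper's configuration):

```python
import tensorflow as tf

# Pretrained ImageNet backbone with its classification head removed.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False                            # freeze pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # monkeypox vs. other
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])

# Hypothetical directory layout: data/{monkeypox,others}/*.jpg
train = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train, epochs=5)
```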
As a popular platform-independent language, Java is widely used in enterprise applications. In the past few years, language vulnerabilities exploited by Java malware have become increasingly prevalent, posing threats across multiple platforms. Security researchers have continuously proposed approaches to fight Java malware. The low code-path coverage and poor execution efficiency of dynamic analysis limit the large-scale application of dynamic Java malware detection methods, so researchers have turned to extracting rich static features to implement efficient malware detection. In this paper, we explore capturing malware semantics with graph learning algorithms and present BejaGNN (Behavior-based Java malware detection via Graph Neural Network), a novel behavior-based Java malware detection method combining static analysis, word embedding techniques, and graph neural networks. Specifically, BejaGNN leverages static analysis to extract inter-procedural control flow graphs (ICFGs) from Java program files and prunes these ICFGs to remove noisy instructions. Word embedding techniques are then adopted to learn semantic representations of Java bytecode instructions. Finally, BejaGNN builds a graph neural network classifier to determine the maliciousness of Java programs. Experimental results on a public Java bytecode benchmark demonstrate that BejaGNN achieves a high F1 score of 98.8% and is superior to existing Java malware detection approaches, confirming the promise of graph neural networks for Java malware detection.
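A sketch of the final classification stage only, using PyTorch Geometric; the GCN layers and dimensions are our placeholders for whatever GNN variant BejaGNN actually uses, and ICFG extraction plus instruction embedding are assumed to have happened upstream:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class ICFGClassifier(torch.nn.Module):
    def __init__(self, emb_dim=64, hidden=128):
        super().__init__()
        self.conv1 = GCNConv(emb_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 2)    # benign / malicious

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))    # message passing over the ICFG
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)           # one vector per program graph
        return self.out(x)

# Usage on a toy graph: 4 instructions, 3 control-flow edges, one program.
x = torch.randn(4, 64)                           # instruction embeddings
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
batch = torch.zeros(4, dtype=torch.long)
logits = ICFGClassifier()(x, edge_index, batch)
```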
A multi-client functional encryption (MCFE) scheme [Goldwasser-Gordon-Goyal 2014] for set intersection is a cryptographic primitive that enables an evaluator to learn the intersection of the sets of a predetermined number of clients, without learning the plaintext set of any individual client. Under such schemes, it is impossible to compute set intersections over arbitrary subsets of clients, a constraint that limits the range of applications. To provide this capability, we redefine the syntax and security notions of MCFE schemes and introduce flexible multi-client functional encryption (FMCFE) schemes. We extend the security notions of MCFE schemes to FMCFE schemes in a straightforward way, and, for a universal set of polynomial size in the security parameter, we propose an FMCFE construction achieving this security. Our construction computes the set intersection for n clients, each holding a set with m elements. We also prove the security of our construction under the DDH1 assumption, a variant of the symmetric external Diffie-Hellman (SXDH) assumption.
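The cryptographic construction itself is beyond a short snippet; purely to contrast the two functionalities (no cryptography here, plain set operations stand in for the encrypted protocol, and the client names are hypothetical):

```python
client_sets = {
    "c1": {"a", "b", "c"},
    "c2": {"b", "c", "d"},
    "c3": {"c", "d", "e"},
}

def mcfe_intersection(sets):
    # MCFE functionality: a single intersection over ALL n clients.
    return set.intersection(*sets.values())

def fmcfe_intersection(sets, subset):
    # FMCFE functionality: the evaluator may query ANY subset of clients.
    return set.intersection(*(sets[c] for c in subset))

print(mcfe_intersection(client_sets))                  # {'c'}
print(fmcfe_intersection(client_sets, ["c1", "c2"]))   # {'b', 'c'}
```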
The constant growth of social media, unconventional web technologies, mobile applications, and Internet of Things (IoT) devices creates challenges for cloud data systems, which must support huge datasets and very high request rates. NoSQL databases, such as Cassandra and HBase, and relational SQL databases with replication, such as Citus/PostgreSQL, have been used to increase the horizontal scalability and high availability of data store systems. In this paper, we evaluate three distributed databases on a low-power, low-cost cluster of commodity single-board computers (SBCs): the relational Citus/PostgreSQL and the NoSQL databases Cassandra and HBase. The cluster has 15 Raspberry Pi 3 nodes and uses the Docker Swarm orchestration tool for service deployment and ingress load balancing over the SBCs. We believe that a low-cost SBC cluster can support cloud-serving goals such as scale-out, elasticity, and high availability. Experimental results clearly demonstrate a trade-off between performance and replication, where replication provides availability and partition tolerance; both properties are essential for distributed systems built on low-power boards. Cassandra attained better results thanks to its consistency levels being specified by the client, whereas Citus and HBase both enforce strong consistency, which penalizes performance as the number of replicas increases.
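The client-specified consistency tuning that favors Cassandra looks like the following with the DataStax Python driver (the contact points, keyspace, and table schema are illustrative assumptions):

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.11", "10.0.0.12"])    # e.g., two of the Pi nodes
session = cluster.connect("benchmark")           # hypothetical keyspace

# ONE: fastest, weakest guarantee; QUORUM: a majority of replicas must ack.
fast_read = SimpleStatement("SELECT v FROM kv WHERE k = %s",
                            consistency_level=ConsistencyLevel.ONE)
safe_write = SimpleStatement("INSERT INTO kv (k, v) VALUES (%s, %s)",
                             consistency_level=ConsistencyLevel.QUORUM)

session.execute(safe_write, ("key1", "value1"))
print(session.execute(fast_read, ("key1",)).one())
```

Because each statement carries its own consistency level, a Cassandra client can trade durability for latency per operation, which is exactly the flexibility the fixed-consistency Citus and HBase configurations lack.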