Amid the current COVID-19 pandemic, sophisticated epidemiological surveillance systems are more important than ever, because conventional approaches have been unable to handle the scope and complexity of this global emergency. In response to this challenge, the authors present the state-of-the-art SEIR-Driven Semantic Integration Framework (SDSIF), which leverages the Internet of Things (IoT) to handle a variety of data sources. The primary innovation of SDSIF is the development of an extensive COVID-19 ontology, which enables unmatched data interoperability and semantic inference. The framework facilitates not only real-time data integration but also advanced analytics, anomaly detection, and predictive modelling through the use of Recurrent Neural Networks (RNNs). By being scalable and flexible enough to fit different healthcare environments and geographical areas, SDSIF is revolutionising epidemiological surveillance for COVID-19 outbreak management. Metrics such as Mean Absolute Error (MAE) and Mean Squared Error (MSE) are used in a rigorous evaluation, together with an exceptional R-squared score, attesting to the effectiveness and ingenuity of SDSIF. Notably, a modest Root Mean Squared Error (RMSE) of 8.70 highlights its accuracy, while a low MAE of 3.03 underscores its predictive precision. The framework's remarkable R-squared score of 0.99 further emphasises its ability to explain variation in the disease data.
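To make the modelling and evaluation components concrete, the following minimal sketch shows an SEIR compartmental model and the error metrics quoted above. It is not the authors' SDSIF; the parameter values, population size, and synthetic "observed" series are illustrative assumptions only.

```python
# Minimal sketch (not the authors' SDSIF): an SEIR compartmental model integrated with a
# simple Euler scheme, plus the MAE, MSE, RMSE, and R-squared metrics named in the abstract.
# Parameter values (beta, sigma, gamma, N) are illustrative assumptions.
import numpy as np

def seir(beta=0.4, sigma=1/5.2, gamma=1/10, N=1_000_000, I0=10, days=160, dt=1.0):
    S, E, I, R = N - I0, 0.0, float(I0), 0.0
    infectious = []
    for _ in range(int(days / dt)):
        new_exposed    = beta * S * I / N      # S -> E
        new_infectious = sigma * E             # E -> I
        new_recovered  = gamma * I             # I -> R
        S -= dt * new_exposed
        E += dt * (new_exposed - new_infectious)
        I += dt * (new_infectious - new_recovered)
        R += dt * new_recovered
        infectious.append(I)
    return np.array(infectious)

def mae(y, yhat): return np.mean(np.abs(y - yhat))
def mse(y, yhat): return np.mean((y - yhat) ** 2)
def r2(y, yhat):  return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Hypothetical observed series: the model curve perturbed with noise, for demonstration only.
predicted = seir()
observed  = predicted * (1 + 0.05 * np.random.default_rng(0).standard_normal(predicted.size))
print(mae(observed, predicted), mse(observed, predicted),
      np.sqrt(mse(observed, predicted)), r2(observed, predicted))
```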
Alzheimer’s disease (AD) is a neurodegenerative disorder that mostly affects elderly people. Its symptoms are initially mild but worsen over time. Although the disease has no cure, early diagnosis can help reduce its impact. A methodology, SMOTE-RF, is proposed for AD prediction, in which Alzheimer's is predicted using machine learning algorithms. The performances of three algorithms, decision tree, extreme gradient boosting (XGB), and random forest (RF), are evaluated for prediction. The Open Access Series of Imaging Studies (OASIS) longitudinal dataset available on Kaggle is used for the experiments. The dataset is balanced using the synthetic minority oversampling technique (SMOTE), and experiments are conducted on both the imbalanced and balanced datasets. On the imbalanced dataset, the decision tree obtained 73.38% accuracy, XGB obtained 83.88% accuracy, and RF obtained a maximum of 87.84% accuracy. On the balanced dataset, the decision tree obtained 83.15% accuracy, XGB obtained 91.05% accuracy, and RF obtained a maximum of 95.03% accuracy. Thus, a maximum accuracy of 95.03% is achieved with SMOTE-RF.
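A hedged sketch of the SMOTE-RF idea follows: balance the training data with SMOTE and fit a random forest. The file path, label column name, and assumption of already-encoded numeric features are placeholders, not the authors' exact preprocessing of the OASIS longitudinal data.

```python
# Sketch of SMOTE + random forest with scikit-learn / imbalanced-learn.
# Assumes features are already numerically encoded and imputed; "Group" is a placeholder label column.
import pandas as pd
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("oasis_longitudinal.csv")            # placeholder path
X, y = df.drop(columns=["Group"]), df["Group"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

model = Pipeline([
    ("smote", SMOTE(random_state=42)),                 # oversample minority classes on the training data only
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Placing SMOTE inside the pipeline ensures synthetic samples are generated only from training folds, which keeps the reported test accuracy honest.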
The Internet of Things (IoT) is revolutionizing the healthcare industry by enhancing personalized patient care. However, the transmission of sensitive health data in IoT systems presents significant security and privacy challenges, further exacerbated by the difficulty of applying traditional protection mechanisms given the limited battery, storage, and computational capabilities of IoT devices. The authors analyze techniques applied in the medical context to encrypt sensitive data and to deal with the unique challenges of resource-constrained devices. A technique attracting increasing interest is the Physical Unclonable Function (PUF), which acts as a kind of biometric for an integrated circuit based on its electrical characteristics. PUFs, however, demand special hardware, so in this work, instead of considering the physical device as a source of randomness, an ElectroCardioGram (ECG) is used to build a 'virtual' PUF. Such a mechanism leverages individual ECG signals to generate a cryptographic key for encrypting and decrypting data. Owing to the poor stability of the ECG signal and the noise typically introduced during its measurement, filtering and feature extraction techniques must be adopted. The proposed model combines pre-processing techniques with a fuzzy extractor to add stability to the signal. Experiments were performed on a dataset containing ECG records gathered over six months, yielding good results in the short term and valuable outcomes in the long term, paving the way for adaptive PUF techniques in this context.
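The following simplified sketch illustrates the 'virtual PUF' pipeline in broad strokes: filter the ECG, extract coarse features, quantize them, and hash the result into a key. It is an assumption-laden illustration; a real fuzzy extractor, as used by the authors, additionally produces public helper data to correct bit errors between enrolment and reproduction, which is omitted here.

```python
# Simplified illustration of deriving a key from an ECG trace (not the authors' full scheme).
import hashlib
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def ecg_key(samples: np.ndarray, fs: float = 250.0) -> bytes:
    # Band-pass filter (0.5-40 Hz) to suppress baseline wander and high-frequency noise.
    b, a = butter(4, [0.5 / (fs / 2), 40 / (fs / 2)], btype="band")
    clean = filtfilt(b, a, samples)
    # Coarse biometric features: R-R intervals from detected R peaks.
    peaks, _ = find_peaks(clean, distance=int(0.4 * fs), prominence=np.std(clean))
    rr = np.diff(peaks) / fs
    # Quantize to reduce sensitivity to measurement noise, then hash into a 256-bit key.
    quantized = np.round(rr, 2)
    return hashlib.sha256(quantized.tobytes()).digest()

# key = ecg_key(ecg_samples)   # ecg_samples: 1-D numpy array holding one ECG record
```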
Energy storage is playing an increasingly important role in the modern world as sustainability becomes a critical issue. Within this domain, rechargeable batteries are gaining significant popularity, having been adopted as the power supply in a broad range of application scenarios, such as cyber-physical systems (CPS), owing to their multiple advantages. In turn, battery inspection and management solutions have been constructed on the CPS architecture to guarantee the quality, reliability, and safety of rechargeable batteries. In particular, lifetime prediction has been extensively studied in recent research, as it helps assess quality and health status to facilitate manufacturing and maintenance. Given this importance, the authors conduct a comprehensive survey of data-driven techniques for battery lifetime prediction, including their current status, challenges, and promises. In contrast to the existing literature, battery lifetime prediction methods are studied in a CPS context in this survey. Hence, the authors focus on the algorithms for lifetime prediction as well as the engineering frameworks that enable data acquisition and the deployment of prediction models in CPS systems. Through this survey, the authors intend to investigate both the academic and practical value of battery lifetime prediction, to benefit researchers and practitioners alike.
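As one illustration of what "data-driven lifetime prediction" can mean in practice, the sketch below regresses battery cycle life on features computed from early charge/discharge cycles. The feature files and model choice are placeholders, not tied to any specific study covered by the survey.

```python
# Illustrative sketch of a data-driven battery lifetime predictor (assumptions only):
# features extracted from early cycles of each cell are mapped to observed cycle life.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Each row: early-cycle features for one cell (e.g., capacity-fade slope, variance of
# discharge-capacity differences, mean cell temperature). Placeholder file names.
X = np.loadtxt("early_cycle_features.csv", delimiter=",")
y = np.loadtxt("cycle_life.csv", delimiter=",")        # observed cycle life per cell

model = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)
scores = cross_val_score(model, X, y, scoring="neg_mean_absolute_error", cv=5)
print("cross-validated MAE (cycles):", -scores.mean())
```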
Supervisory control and data acquisition (SCADA) systems are critical in Industry 4.0 for controlling and monitoring industrial processes. However, these systems are vulnerable to various attacks, and intelligent, robust intrusion detection systems are therefore necessary security tools. Machine learning-based intrusion detection systems require datasets with a balanced class distribution, but in practice an imbalanced class distribution is unavoidable. A dataset created by running the SCADA IEC 60870-5-104 (IEC 104) protocol on a testbed network is presented. The dataset includes normal traffic and attack traffic such as port scan, brute force, and denial-of-service attacks. Various types of denial-of-service attacks are generated to create a robust and specific dataset for training the intrusion detection system model. Three popular techniques for handling class imbalance, that is, random over-sampling, random under-sampling, and synthetic minority oversampling, are implemented to select the best dataset for the experiment. Gradient boosting, decision tree, and random forest algorithms are used as classifiers for the intrusion detection system models. Experimental results indicate that the intrusion detection system models using decision tree and random forest classifiers with random under-sampling achieved the highest accuracy of 99.05%. The models' performance is verified using metrics such as recall, precision, F1-score, receiver operating characteristic curves, and area under the curve. Additionally, 10-fold cross-validation shows no indication of overfitting in the created intrusion detection system model.
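A hedged sketch of the best-performing configuration described above follows: random under-sampling of the majority class, then decision-tree and random-forest classifiers. The file path and the "label" column name are assumptions about how the IEC 104 flow records are stored, not details from the paper.

```python
# Sketch: random under-sampling + tree-based IDS classifiers with 10-fold cross-validation.
import pandas as pd
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import classification_report

df = pd.read_csv("iec104_flows.csv")                   # placeholder path to the flow dataset
X, y = df.drop(columns=["label"]), df["label"]         # "label": normal / attack class, assumed name
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)

# Balance only the training portion by discarding majority-class samples.
X_bal, y_bal = RandomUnderSampler(random_state=1).fit_resample(X_train, y_train)

for clf in (DecisionTreeClassifier(random_state=1), RandomForestClassifier(random_state=1)):
    clf.fit(X_bal, y_bal)
    print(type(clf).__name__)
    print(classification_report(y_test, clf.predict(X_test)))      # precision, recall, F1-score
    print("10-fold CV accuracy:", cross_val_score(clf, X_bal, y_bal, cv=10).mean())
```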
Testing the visual field is a valuable diagnostic tool for identifying eye conditions such as cataract, glaucoma, and retinal disease. Its quick and straightforward testing process has become an essential component of efforts to prevent blindness; still, the device must be accessible to the general public. This research develops a machine learning model that can work with Edge devices such as smartphones, opening the opportunity to integrate the disease-detecting model into multiple Edge devices and automate their operation. The authors use convolutional neural networks (CNNs) and deep learning to determine which optimisers give the best results when detecting cataracts from live photos of eyes, by comparing different models and optimisers. Using these methods, a reliable cataract-detection model can be obtained. The proposed TensorFlow Lite model, constructed in this study by combining CNN layers with the Adam optimiser, is called the Optimised Light Weight Sequential Deep Learning Model (SDLM). SDLM is trained using a small number of CNN layers and parameters, which gives it its compatibility, fast execution time, and low memory requirements. The proposed Android app, I-Scan, uses SDLM in the form of TensorFlow Lite to demonstrate the model on Edge devices.
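The sketch below shows the general pattern described above: a small sequential Keras CNN trained with the Adam optimiser and converted to TensorFlow Lite for on-device inference. It is not the authors' exact SDLM architecture; the input size, layer sizes, and binary output are illustrative assumptions.

```python
# Minimal sketch of a lightweight CNN + Adam, exported to TensorFlow Lite (assumed architecture).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # cataract vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds: tf.data pipelines of eye photos

# Convert the trained model to TensorFlow Lite so it can be bundled into an Android app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]       # weight quantisation to shrink the model
open("sdlm.tflite", "wb").write(converter.convert())
```

Keeping the layer and parameter count small is what makes the exported .tflite file fast to execute and light on memory, which matches the stated goals for smartphone deployment.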
Given the short time intervals in which short-circuit faults occur in a power system, a certain delay is introduced between the moment of a fault's inception and the moment at which the fault is actually detected. In this small time margin, the high amplitudes of the fault current can deal significant damage to the power system. A technique is presented to characterise different types of short-circuit faults in a power system for real-time detection, namely AB, BC, CA, ABC, AG, BG, and CG faults (and normal operation), based on the geometry of the curve generated by the Clarke transform of the three-phase voltages. The process was conducted in real time using the HIL402 system and a Raspberry Pi 3, with all programming done in the Python programming language. It was concluded that the tested fault types can be accurately characterised using the eigenvalues and eigenvectors of the matrix that characterises the ellipse associated with each fault: eigenvalues can be used to determine the fault inception distance, and eigenvectors can be used to determine the type of fault that occurred. Next, a machine learning model was designed based on this characterisation technique and embedded into a Raspberry Pi 3, enabling real-time fault detection and classification in a base power system. Finally, the accuracy of the model was tested under different measurement conditions, yielding satisfactory results for a selected set of conditions and overcoming the shortcomings of current research, which does not perform detection and classification in real time.
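A hedged sketch of the characterisation step follows: apply the Clarke (alpha-beta) transform to the three-phase voltages and take the eigenvalues and eigenvectors of a matrix describing the resulting elliptical trajectory. Using the covariance of the (alpha, beta) samples as that matrix is an assumption made here for illustration; the paper's exact ellipse fitting may differ.

```python
# Sketch: Clarke transform of three-phase voltages and eigen-decomposition of the
# matrix characterising the resulting ellipse (covariance used as an assumed stand-in).
import numpy as np

def clarke(va: np.ndarray, vb: np.ndarray, vc: np.ndarray) -> np.ndarray:
    """Amplitude-invariant Clarke transform; returns a 2 x N array of (alpha, beta) samples."""
    alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    beta  = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (vb - vc)
    return np.vstack([alpha, beta])

def ellipse_signature(va, vb, vc):
    ab = clarke(va, vb, vc)
    cov = np.cov(ab)                           # 2x2 matrix characterising the trajectory
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues/eigenvectors used as fault features
    return eigvals, eigvecs

# Example: a balanced system (normal operation) traces a near-circular curve,
# so the two eigenvalues come out nearly equal.
t  = np.linspace(0, 0.1, 1000)
va = np.sin(2 * np.pi * 60 * t)
vb = np.sin(2 * np.pi * 60 * t - 2 * np.pi / 3)
vc = np.sin(2 * np.pi * 60 * t + 2 * np.pi / 3)
print(ellipse_signature(va, vb, vc)[0])
```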
A predictive queue management method is proposed for constrained congestion control in Internet routers in the face of communication delays. The proposed method uses the queue and router models, the input traffic rate, and the queue length to precisely characterise the entire process. The resulting model is then used to construct an optimal constrained active queue management (CAQM) strategy using model predictive control (MPC). Important factors, such as link capacity, the number of Transmission Control Protocol (TCP) sessions, and round-trip time, among others, are selected and used to linearise the TCP interconnection. An efficient MPC-based structure is then designed to manage the CAQM in the face of unknown disturbances. Simulations are used to validate the proposed method's effectiveness and robustness.
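The sketch below illustrates the receding-horizon idea on the standard linearised TCP/AQM fluid model (window deviation W, queue deviation q, drop-probability deviation p, with N sessions, link capacity C, and round-trip time R). All numerical values are illustrative assumptions, the transport delay in p is neglected, and a brute-force search over candidate inputs stands in for the quadratic programme a full MPC design would solve.

```python
# Illustrative receding-horizon (MPC-style) AQM controller on the linearised TCP/AQM model.
import numpy as np

N, C, R = 60, 3750.0, 0.25          # sessions, link capacity (packets/s), RTT (s) -- assumed values
dt, horizon = 0.01, 20              # sampling time (s) and prediction horizon (steps)

def step(w, q, p):
    """One Euler step of the linearised dynamics (deviations around the operating point)."""
    dw = -(2 * N / (R ** 2 * C)) * w - (R * C ** 2 / (2 * N ** 2)) * p
    dq = (N / R) * w - q / R
    return w + dt * dw, q + dt * dq

def mpc_action(w, q, candidates=np.linspace(-0.05, 0.05, 51)):
    """Choose the constrained drop-probability deviation minimising predicted queue error."""
    best_p, best_cost = 0.0, np.inf
    for p in candidates:                         # input constraint: |p| <= 0.05 around the nominal value
        wi, qi, cost = w, q, 0.0
        for _ in range(horizon):
            wi, qi = step(wi, qi, p)
            cost += qi ** 2 + 1e3 * p ** 2       # penalise queue deviation and control effort
        if cost < best_cost:
            best_p, best_cost = p, cost
    return best_p

w, q = 0.0, 150.0                    # initial deviations from the operating point
for _ in range(500):
    p = mpc_action(w, q)             # receding horizon: apply only the first move, then re-plan
    w, q = step(w, q, p)
print("final queue deviation:", round(q, 1), "packets")
```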