Fault detection, classification, and location prediction are crucial for maintaining the stability and reliability of modern power systems, reducing economic losses, and enhancing system protection sensitivity. This paper presents a novel Hierarchical Deep Learning Approach (HDLA) for accurate and efficient fault diagnosis in transmission lines. HDLA leverages two-stage transformer-based classification and regression models to perform Fault Detection (FD), Fault Type Classification (FTC), and Fault Location Prediction (FLP) directly from synchronized raw three-phase current and voltage samples. By bypassing the need for feature extraction, HDLA significantly reduces computational complexity while achieving superior performance compared to existing deep learning methods. The efficacy of HDLA is validated on a comprehensive dataset encompassing various fault scenarios with diverse types, locations, resistances, inception angles, and noise levels. The results demonstrate significant improvements in accuracy, recall, precision, and F1-score metrics for classification, and Mean Absolute Errors (MAEs) and Root Mean Square Errors (RMSEs) for prediction, showcasing the effectiveness of HDLA for real-time fault diagnosis in power systems.
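The two-stage hierarchy described above can be sketched as a small control-flow skeleton: stage one detects a fault, and only then are the type classifier and location regressor invoked. The three model callables, class names, and the 0.5 threshold below are illustrative assumptions standing in for the paper's transformer models, not its implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class Diagnosis:
    faulty: bool
    fault_type: Optional[str] = None      # e.g. "AG" (illustrative label)
    location_km: Optional[float] = None

class HierarchicalDiagnoser:
    """Two-stage pipeline: FD first, then FTC + FLP only on faulty windows."""
    def __init__(self,
                 detect: Callable[[Sequence[float]], float],   # P(fault) in [0, 1]
                 classify: Callable[[Sequence[float]], str],   # fault type
                 locate: Callable[[Sequence[float]], float],   # distance along line
                 threshold: float = 0.5):
        self.detect, self.classify, self.locate = detect, classify, locate
        self.threshold = threshold

    def diagnose(self, window: Sequence[float]) -> Diagnosis:
        # Stage 1 (FD): skip the heavier stage-2 models on healthy windows.
        if self.detect(window) < self.threshold:
            return Diagnosis(faulty=False)
        # Stage 2 (FTC + FLP): run only when a fault is detected.
        return Diagnosis(faulty=True,
                         fault_type=self.classify(window),
                         location_km=self.locate(window))

# Usage with trivial stand-in models:
hdla = HierarchicalDiagnoser(detect=lambda w: max(abs(x) for x in w) / 10.0,
                             classify=lambda w: "AG",
                             locate=lambda w: 42.0)
print(hdla.diagnose([0.1, -0.2, 0.15]))   # healthy window -> stage 2 skipped
print(hdla.diagnose([8.0, -7.5, 9.1]))    # overcurrent window -> full diagnosis
```

The hierarchy is what saves computation: most windows are healthy, so the type and location models run only on the minority flagged by the detector.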
In cloud computing (CC), task scheduling allocates tasks to the most suitable resources for execution. This article proposes a task-scheduling model that combines multi-objective optimization with a deep learning (DL) model. Initially, multi-objective scheduling of incoming user tasks is carried out using the proposed hybrid fractional flamingo beetle optimization (FFBO), formed by integrating dung beetle optimization (DBO), the flamingo search algorithm (FSA), and fractional calculus (FC). Here, the fitness function depends on reliability, cost, predicted energy, and makespan; the predicted energy is forecasted by a deep residual network (DRN). Thereafter, task scheduling is accomplished with the proposed deep feedforward neural network fused long short-term memory (DFNN-LSTM) model, a combination of DFNN and LSTM. Moreover, when scheduling the workflow, both the task parameters and the virtual machine's (VM) live parameters are taken into consideration. Task parameters are earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU). The proposed DFNN-LSTM+FFBO model achieved superior makespan, energy, and resource utilization of 0.188, 0.950 J, and 0.238, respectively.
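The abstract states only that the fitness function depends on reliability, cost, predicted energy, and makespan. A common way to realize such a multi-objective fitness is a weighted scalarization; the equal weights and the exact aggregation below are assumptions for illustration, not the paper's formula.

```python
# Illustrative scalarized fitness for the multi-objective scheduler.
# Minimize makespan, cost, and energy; maximize reliability (so we
# penalize 1 - reliability). Weights are assumed, not from the paper.

def fitness(makespan: float, cost: float, predicted_energy: float,
            reliability: float,
            weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Lower is better."""
    w1, w2, w3, w4 = weights
    return (w1 * makespan + w2 * cost
            + w3 * predicted_energy + w4 * (1.0 - reliability))

# Candidate schedule A dominates B on every objective, so it scores lower:
a = fitness(makespan=0.188, cost=0.3, predicted_energy=0.95, reliability=0.99)
b = fitness(makespan=0.400, cost=0.5, predicted_energy=1.20, reliability=0.90)
print(a < b)  # True
```

In the full pipeline, the predicted-energy argument would come from the DRN forecaster rather than being supplied directly.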
Due to the massive growth in Internet of Things (IoT) devices, it is necessary to properly identify and authorize the devices connected to a network and to protect them against attacks. In this manuscript, IoT Device Type Identification based on a Variational Auto-Encoder Wasserstein Generative Adversarial Network optimized with the Pelican Optimization Algorithm (IoT-DTI-VAWGAN-POA) is proposed for prolonging IoT security. The proposed technique comprises three phases: data collection, feature extraction, and IoT device type detection. Initially, a real network traffic dataset is gathered from distinct IoT device types, such as baby monitors, security cameras, etc. In the feature extraction phase, the network traffic feature vector comprises packet sizes and their mean, variance, and kurtosis, derived using adaptive and concise empirical wavelet transforms. The extracted features are then supplied to the VAWGAN, which identifies IoT devices as known or unknown. The Pelican Optimization Algorithm (POA) is then used to optimize the weight factors of the VAWGAN for better IoT device type identification. The proposed IoT-DTI-VAWGAN-POA method is implemented in Python, and its proficiency is examined under performance metrics such as accuracy, precision, F-measure, sensitivity, error rate, computational complexity, and ROC. It provides 33.41%, 32.01%, and 31.65% higher accuracy, and 44.78%, 43.24%, and 48.98% lower error rate compared to the existing methods.
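The statistical part of the feature vector (mean, variance, kurtosis of packet sizes) can be sketched directly; the wavelet stage is omitted here, so this is a simplification in which the moments are computed on raw packet sizes rather than on the EWT sub-band coefficients, and the Fisher kurtosis convention is an assumption.

```python
import numpy as np

def traffic_features(packet_sizes):
    """Mean, variance, and kurtosis of packet sizes in one traffic window."""
    x = np.asarray(packet_sizes, dtype=float)
    mu = x.mean()
    var = x.var()
    # Fisher kurtosis (normal distribution -> 0); convention assumed here.
    kurt = ((x - mu) ** 4).mean() / var ** 2 - 3.0 if var > 0 else 0.0
    return np.array([mu, var, kurt])

# A burst of small packets with one MTU-sized outlier is heavy-tailed:
print(traffic_features([60, 60, 1500, 60, 60]))
```

Per-window vectors like this (one per traffic window, per sub-band) would then be stacked into the input matrix fed to the VAWGAN.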
This research introduces a novel machine learning algorithm-based quality estimation and grading system for mangoes. The suggested work is divided into four main parts: pre-processing, neutrosophic model transformation, feature extraction, and grading. The raw images are first pre-processed in five major stages: reading, resizing, noise removal, contrast enhancement via CLAHE, and smoothing via filtering. The pre-processed images are then converted into the neutrosophic domain for more effective mango grading; the image is processed under a new geometric-mean-based neutrosophic approach to transform it into the neutrosophic domain. Finally, the prediction of TSS for the different chilling conditions is done by an Improved Deep Belief Network (IDBN), and based on this, the grading of the mango is performed automatically, as the model has already been trained for it. Here, the prediction of TSS is carried out under the consideration of SSC, firmness, and TAC. A comparison between the proposed and traditional methods is carried out across various metrics to confirm the efficacy of the proposed approach.
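As a rough illustration of a neutrosophic image transform, the sketch below follows the common T/I/F construction (truth from a local mean, indeterminacy from the deviation from that mean, falsity as the complement of truth), with the local arithmetic mean swapped for a geometric mean in the spirit of the abstract. The exact formulation of the paper's geometric-mean-based approach is not given here, so every formula below is an assumption.

```python
import numpy as np

def neutrosophic(img, win=3):
    """Map a grayscale image to assumed (T, I, F) neutrosophic memberships."""
    g = np.asarray(img, dtype=float) + 1e-6           # avoid log(0)
    pad = win // 2
    lg = np.log(np.pad(g, pad, mode="edge"))
    gm = np.empty_like(g)
    for i in range(g.shape[0]):                       # local geometric mean:
        for j in range(g.shape[1]):                   # exp(mean of log-pixels)
            gm[i, j] = np.exp(lg[i:i + win, j:j + win].mean())
    T = (gm - gm.min()) / (gm.max() - gm.min() + 1e-12)   # truth membership
    d = np.abs(g - gm)                                    # deviation from local mean
    I = (d - d.min()) / (d.max() - d.min() + 1e-12)       # indeterminacy
    F = 1.0 - T                                           # falsity
    return T, I, F

T, I, F = neutrosophic(np.arange(25).reshape(5, 5) * 10.0)
print(T.min() >= 0 and T.max() <= 1 and np.allclose(F, 1 - T))  # True
```

Features for grading would then be extracted from these three channels rather than from the raw intensities.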
Cardiovascular diseases (CVD) represent a significant global health challenge, often remaining undetected until severe cardiac events, such as heart attacks or strokes, occur. In regions like Qatar, research focused on non-invasive CVD identification methods, such as retinal imaging and dual-energy X-ray absorptiometry (DXA), is limited. This study presents a system known as Multi-Modal Artificial Intelligence for Cardiovascular Disease (M2AI-CVD), designed to provide highly accurate predictions of CVD. The M2AI-CVD framework employs a four-fold methodology: First, it rigorously evaluates image quality and processes lower-quality images for further analysis. Subsequently, it uses the Entropy-based Fuzzy C Means (EnFCM) algorithm for precise image segmentation. The Multi-Modal Boltzmann Machine (MMBM) is then employed to extract relevant features from various data modalities, while the Genetic Algorithm (GA) selects the most informative features. Finally, a ZFNet Convolutional Neural Network (ZFNetCNN) classifies images, effectively distinguishing between CVD and Non-CVD cases. Tested across five distinct datasets, the system yields outstanding results: an accuracy of 95.89%, a sensitivity of 96.89%, and a specificity of 98.7%. This multi-modal AI approach offers a promising solution for the accurate and early detection of cardiovascular diseases, significantly improving the prospects of timely intervention and better patient outcomes in the realm of cardiovascular health.
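The segmentation step builds on fuzzy C-means; below is a sketch of the plain FCM core on pixel intensities. The entropy/gray-level weighting that distinguishes EnFCM is deliberately omitted, so this is the textbook algorithm under a simplifying assumption, with deterministic center initialization for reproducibility.

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50):
    """Textbook fuzzy C-means on 1-D intensity data (EnFCM's core, unweighted)."""
    x = np.asarray(x, dtype=float).ravel()
    centers = np.linspace(x.min(), x.max(), c)          # deterministic init
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9    # (c, n) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))                  # inverse-distance weights
        u /= u.sum(axis=0, keepdims=True)                   # memberships sum to 1
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)                 # fuzzy-weighted centers
    return centers, u

pixels = [10, 11, 9, 12, 200, 198, 202, 199]
centers, u = fcm(pixels, c=2)
print(np.sort(centers))   # two centers, one near 10 and one near 200
```

On an image, `x` would be the flattened intensities and the resulting memberships would be reshaped back into per-class masks.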
This paper presents a non-parametric identification scheme for a class of uncertain switched nonlinear systems, based on a continuous-time neural network identifier. The adaptive identifier guarantees convergence of the identification errors to a small vicinity of the origin. The convergence of the identification error is established via Lyapunov theory, supported by a practical-stability variant for switched systems, and the same stability analysis yields the learning laws that adjust the identifier structure. The upper bound of the convergence region is characterized in terms of the uncertainties and noises affecting the switched system. A second, finite-time-convergent learning law is also developed as an alternative way of enforcing the stability of the identification error. The study thus provides a formal technique for analysing adaptive identifiers based on continuous neural networks for uncertain switched systems. The identifier is tested on two benchmark problems: a simple mechanical system and a switched representation of the human gait model; in both cases, accurate identification results are achieved.
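The flavor of such an adaptive identifier can be shown on a toy scalar example: a plant with a known Hurwitz linear part and an unknown weight on a sigmoidal term, identified with a Lyapunov-derived gradient learning law and simulated by Euler integration. The scalar plant, the gains, and the single-weight learning law are illustrative assumptions; the paper's setting is switched and multivariable.

```python
import numpy as np

# Toy continuous-time neural identifier, Euler-simulated.
# Plant:      x'    = A*x     + W_true*tanh(x)   (W_true unknown to identifier)
# Identifier: x_h'  = A*x_h   + W_hat*tanh(x)
# Learning:   W_hat'= -gamma * e * tanh(x),  e = x_h - x
# V = e^2/2 + (W_hat - W_true)^2/(2*gamma) gives V' = 2*A*e^2 <= 0 for A < 0.

A, W_true = -2.0, 1.5          # Hurwitz linear part, unknown "true" weight
gamma, dt, steps = 5.0, 1e-3, 10_000

x, x_hat, W_hat = 0.5, 0.0, 0.0
errors = []
for _ in range(steps):
    s = np.tanh(x)
    e = x_hat - x                       # identification error
    x_dot = A * x + W_true * s          # plant dynamics
    xh_dot = A * x_hat + W_hat * s      # identifier dynamics
    W_dot = -gamma * e * s              # Lyapunov-derived learning law
    x, x_hat, W_hat = x + dt * x_dot, x_hat + dt * xh_dot, W_hat + dt * W_dot
    errors.append(abs(e))

print(errors[0], errors[-1])   # error shrinks toward a small vicinity of zero
```

In the switched case, the same construction is repeated per mode, and the practical-stability argument bounds the error across switching instants.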