Computer vision applications such as image understanding and recognition, as well as image processing, are areas where AI techniques like the convolutional neural network (CNN) have attained great success. AI techniques are, however, used far less frequently in low-level vision applications such as image compression. Improving the visual quality of lossy video/image compression has been a long-standing obstacle. Image recognition and image processing tasks can now be addressed with deep-learning CNNs thanks to the availability of large training datasets and recent advances in computing power. This paper presents a novel CNN-based compression framework comprising a Compact CNN (ComCNN) and a Reconstruction CNN (RecCNN), which are trained concurrently and ideally consolidated into a single compression framework, together with MS-ROI (Multi Structure-Region of Interest) mapping that highlights the semantically notable portions of the image. The framework attains a mean PSNR value of 32.9 dB, a gain of 3.52 dB, and a mean SSIM value of 0.9262, a gain of 0.0723, over the other methods when compared on the 6 main test images. Experimental results validate that the architecture substantially surpasses image compression frameworks that use deblocking or denoising post-processing, as evaluated by the Peak Signal to Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM), with a mean PSNR, SSIM and compression ratio of 38.45 dB, 0.9602 and 1.75x respectively for the 50 test images, thus obtaining state-of-the-art performance for Quality Factor (QF) = 5.
{"title":"JPEG2000-Based Semantic Image Compression using CNN","authors":"Anish Nagarsenker, P. Khandekar, Minal Deshmukh","doi":"10.32985/ijeces.14.5.4","DOIUrl":"https://doi.org/10.32985/ijeces.14.5.4","url":null,"abstract":"Some of the computer vision applications such as understanding, recognition as well as image processing are some areas where AI techniques like convolutional neural network (CNN) have attained great success. AI techniques are not very frequently used in applications like image compression which are a part of low-level vision applications. Intensifying the visual quality of the lossy video/image compression has been a huge obstacle for a very long time. Image processing tasks and image recognition can be addressed with the application of deep learning CNNs as a result of the availability of large training datasets and the recent advances in computing power. This paper consists of a CNN-based novel compression framework comprising of Compact CNN (ComCNN) and Reconstruction CNN (RecCNN) where they are trained concurrently and ideally consolidated into a compression framework, along with MS-ROI (Multi Structure-Region of Interest) mapping which highlights the semiotically notable portions of the image. The framework attains a mean PSNR value of 32.9dB, achieving a gain of 3.52dB and attains mean SSIM value of 0.9262, achieving a gain of 0.0723dB over the other methods when compared using the 6 main test images. Experimental results in the proposed study validate that the architecture substantially surpasses image compression frameworks, that utilized deblocking or denoising post- processing techniques, classified utilizing Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measures (SSIM) with a mean PSNR, SSIM and Compression Ratio of 38.45, 0.9602 and 1.75x respectively for the 50 test images, thus obtaining state-of-art performance for Quality Factor (QF)=5.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46074448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The development of wireless technology in recent years has increased the demand for channel resources within a limited spectrum. Since the spectrum is a scarce resource, system performance can be improved through bandwidth optimization. Signal reconstruction algorithms are needed to reconstruct a signal from incomplete knowledge of the original. In this paper, we propose a new scheme, based on a noise reject filter (NRF), for reducing the effect of additive white Gaussian noise (AWGN) on a previously discussed algorithm for baseband signal transmission and reconstruction that can recover most of the signal's energy without sending most of its concentrated power, as conventional methods do, thus achieving bandwidth optimization. The proposed noise-reduction scheme was tested on a pulse signal and on pulse streams at different rates (2, 4, 6, and 8 Mbps), showed good reconstruction performance in terms of the normalized mean squared error (NMSE), and achieved an average enhancement of around 48%. The proposed schemes for signal reconstruction and noise reduction can be applied to applications such as ultra-wideband (UWB) communications, radio frequency identification (RFID) systems, mobile communication networks, and radar systems.
{"title":"Noise Effects on a Proposed Algorithm for Signal Reconstruction and Bandwidth Optimization","authors":"Ahmed F. Ashour, Ashraf A. M. Khalaf, Aziza I. Hussein, Hesham F. A. Hamed, A. Ramadan","doi":"10.32985/ijeces.14.5.2","DOIUrl":"https://doi.org/10.32985/ijeces.14.5.2","url":null,"abstract":"The development of wireless technology in recent years has increased the demand for channel resources within a limited spectrum. The system's performance can be improved through bandwidth optimization, as the spectrum is a scarce resource. To reconstruct the signal, given incomplete knowledge about the original signal, signal reconstruction algorithms are needed. In this paper, we propose a new scheme for reducing the effect of adding additive white Gaussian noise (AWGN) using a noise reject filter (NRF) on a previously discussed algorithm for baseband signal transmission and reconstruction that can reconstruct most of the signal’s energy without any need to send most of the signal’s concentrated power like the conventional methods, thus achieving bandwidth optimization. The proposed scheme for noise reduction was tested for a pulse signal and stream of pulses with different rates (2, 4, 6, and 8 Mbps) and showed good reconstruction performance in terms of the normalized mean squared error (NMSE) and achieved an average enhancement of around 48%. The proposed schemes for signal reconstruction and noise reduction can be applied to different applications, such as ultra-wideband (UWB) communications, radio frequency identification (RFID) systems, mobile communication networks, and radar systems.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44684091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An intermediate energy storage system is essential in a standalone multi-source renewable energy system to increase stability, reliability of supply, and power quality. Among the most practical energy storage solutions is the combination of supercapacitors and chemical batteries. However, the major difficulty in this kind of application lies in designing the power management and the control scheme of the hybrid energy storage system. The main purpose of this paper is to develop a novel approach to controlling the DC bus voltage based on frequency decomposition of the reference power. The paper uses a storage system combining batteries and supercapacitors, which are integrated into the multi-source renewable energy system to supply an AC load. The technique uses the properties of low-pass filters to control the DC bus voltage by balancing the generated green power against the fluctuating load. The hybrid storage system regulates power fluctuations by absorbing surplus power and providing the required power. The results show good performance of the proposed control scheme, such as low battery charge/discharge current rates, a lower current stress level on the batteries, and improved voltage control, which lead to increased battery life.
{"title":"Development of a Control Strategy for the Hybrid Energy Storage Systems in Standalone Microgrid","authors":"H. Guentri, A. Dahbi, T. Allaoui, S. Aoulmit, A. Bouraiou","doi":"10.32985/ijeces.14.5.9","DOIUrl":"https://doi.org/10.32985/ijeces.14.5.9","url":null,"abstract":"The intermediate energy storage system is very necessary for the standalone multi-source renewable energy system to increase stability, reliability of supply, and power quality. Among the most practical energy storage solutions is combining supercapacitors and chemical batteries. However, the major problem in this kind of application is the design of the power management, as well as the control scheme of hybrid energy storage systems. The focal purpose of this paper is to develop a novel approach to control DC bus voltage based on the reference power's frequency decomposition. This paper uses a storage system combined of batteries and supercapacitors. These later are integrated in the multi-source renewable energy system to supply an AC load. This technique uses the low-pass filters' properties to control the DC bus voltage by balancing the generated green power and the fluctuating load. The hybrid storage system regulates power fluctuations by absorbing surplus power and providing required power. The results show good performances of the proposed control scheme, such as low battery current charge/discharge rates, lower current stress level on batteries, voltage control improvements, which lead to increase the battery life.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44553037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting anomalies in videos is a complex task due to diverse content, noisy labeling, and a lack of frame-level labeling. To address these challenges in weakly labeled datasets, we propose a novel custom loss function in conjunction with the multi-instance learning (MIL) algorithm. Our approach utilizes the UCF Crime and ShanghaiTech datasets for anomaly detection. The UCF Crime dataset includes labeled videos depicting a range of incidents such as explosions, assaults, and burglaries, while the ShanghaiTech dataset is one of the largest anomaly datasets, with over 400 video clips featuring three different scenes and 130 abnormal events. We generated pseudo labels for videos using the MIL technique to detect frame-level anomalies from video-level annotations, and to train the network to distinguish between normal and abnormal classes. We conducted extensive experiments on the UCF Crime dataset using C3D and I3D features to test our model's performance. For the ShanghaiTech dataset, we used I3D features for training and testing. Our results show that with I3D features, we achieve an 84.6% frame-level AUC score for the UCF Crime dataset and a 92.27% frame-level AUC score for the ShanghaiTech dataset, which are comparable to other methods used for similar datasets.
{"title":"Real-World Anomaly Detection in Video Using Spatio-Temporal Features Analysis for Weakly Labelled Data with Auto Label Generation","authors":"Rikin J. Nayak, Jitendra P. Chaudhari","doi":"10.32985/ijeces.14.5.8","DOIUrl":"https://doi.org/10.32985/ijeces.14.5.8","url":null,"abstract":"Detecting anomalies in videos is a complex task due to diverse content, noisy labeling, and a lack of frame-level labeling. To address these challenges in weakly labeled datasets, we propose a novel custom loss function in conjunction with the multi-instance learning (MIL) algorithm. Our approach utilizes the UCF Crime and ShanghaiTech datasets for anomaly detection. The UCF Crime dataset includes labeled videos depicting a range of incidents such as explosions, assaults, and burglaries, while the ShanghaiTech dataset is one of the largest anomaly datasets, with over 400 video clips featuring three different scenes and 130 abnormal events. We generated pseudo labels for videos using the MIL technique to detect frame-level anomalies from video-level annotations, and to train the network to distinguish between normal and abnormal classes. We conducted extensive experiments on the UCF Crime dataset using C3D and I3D features to test our model's performance. For the ShanghaiTech dataset, we used I3D features for training and testing. Our results show that with I3D features, we achieve an 84.6% frame-level AUC score for the UCF Crime dataset and a 92.27% frame-level AUC score for the ShanghaiTech dataset, which are comparable to other methods used for similar datasets.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46170718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ultimate goal of the Super-Resolution (SR) technique is to generate a High-Resolution (HR) image by combining corresponding Low-Resolution (LR) images, which is useful in applications such as surveillance, remote sensing, and medical diagnosis. The original HR image may be corrupted by various causes such as warping, blurring, and added noise. SR image reconstruction methods are frequently plagued by obtrusive restoration artifacts such as noise, staircasing, and blurring, so striking a balance between smoothness and edge retention is never easy. This work presents research to improve the effectiveness of SR image reconstruction by enhancing the visual information and autonomous machine perception. The reference images are obtained from the DIV2K and BSD100 datasets, and each reference LR image is converted into a composed LR image using the proposed Lucy Richardson and Modified Mean Wiener (LR-MMWF) filters. The processed LR image is provided as input to the bicubic interpolation stage. Afterward, the initial HR image obtained from the interpolation stage is given as input to the SR model, which contains a fidelity term to decrease the residual between the projected HR image and the observed LR image. Finally, a model based on a Bilateral Total Variation (BTV) prior is utilized to improve the stability of the HR image by refining its quality. The performance analysis shows that the proposed LR-MMW filter attains better PSNR and Structural Similarity (SSIM) than the existing filters. The experimental results show that the proposed LR-MMW filter achieves better performance, providing a higher PSNR value of 31.65 dB, whereas the Filter-Net and 1D/2D CNN filters achieve PSNR values of 28.95 dB and 31.63 dB respectively.
{"title":"Lucy Richardson and Mean Modified Wiener Filter for Construction of Super-Resolution Image","authors":"Pravin Balaso Chopade, Prabhakar N. Kota, Bhagvat D. Jadhav, Pravin Marotrao Ghate, Shankar Dattatray Chavan","doi":"10.32985/ijeces.14.5.3","DOIUrl":"https://doi.org/10.32985/ijeces.14.5.3","url":null,"abstract":"The ultimate goal of the Super-Resolution (SR) technique is to generate the High-Resolution (HR) image by combining the corresponding images with Low-Resolution (LR), which is utilized for different applications such as surveillance, remote sensing, medical diagnosis, etc. The original HR image may be corrupted due to various causes such as warping, blurring, and noise addition. SR image reconstruction methods are frequently plagued by obtrusive restorative artifacts such as noise, stair casing effect, and blurring. Thus, striking a balance between smoothness and edge retention is never easy. By enhancing the visual information and autonomous machine perception, this work presented research to improve the effectiveness of SR image reconstruction The reference image is obtained from DIV2K and BSD 100 dataset, these reference LR image is converted as composed LR image using the proposed Lucy Richardson and Modified Mean Wiener (LR-MMWF) Filters. The possessed LR image is provided as input for the stage of bicubic interpolation. Afterward, the initial HR image is obtained as output from the interpolation stage which is given as input for the SR model consisting of fidelity term to decrease residual between the projected HR image and detected LR image. At last, a model based on Bilateral Total Variation (BTV) prior is utilized to improve the stability of the HR image by refining the quality of the image. The results obtained from the performance analysis show that the proposed LR-MMW filter attained better PSNR and Structural Similarity (SSIM) than the existing filters. The results obtained from the experiments show that the proposed LR-MMW filter achieved better performance and provides a higher PSNR value of 31.65dB whereas the Filter-Net and 1D,2D CNN filter achieved PSNR values of 28.95dB and 31.63dB respectively.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44622360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article, different design configurations of a rectangular microstrip patch antenna (RMSA) array operating in the S-band are presented. The substrate material utilized in the designs is Rogers-RT-5800 with a dielectric permittivity of εr = 2.2, a thickness of h = 1.6 mm, and a loss tangent of δ = 0.009. The performances of a single element and of (1×2), (2×2) and (1×4) arrays operating at 3.6 GHz are investigated using the CST and HFSS numerical techniques. The simulation results indicate that, with HFSS, the advanced single element, (1×2), (2×2) and (1×4) arrays respectively achieve gains of (8.68, 10.35, 10.43 and 10.52) dB, VSWR of (1.045, 1.325, 1.095 and 1.945), return losses of (-34.91, -17.15, -27.42 and -12.26) dB, and bandwidths of (85.00, 200.00, 215 and 106.4) MHz. The corresponding values provided by CST are gains of (7.36, 9.8, 9.87 and 10.30) dB, VSWR of (1.011, 1.304, 1.305 and 1.579), return losses of (-44.97, -17.58, -17.55 and -14.01) dB, and bandwidths of (92.28, 204, 229.49 and 129.12) MHz. The results also reveal that the highest gain and the widest bandwidth are achieved with the (1×4) and (2×2) array configurations, respectively, under both simulation techniques. Additionally, good agreement with, and an improvement over, results previously reported for the same array types operating at S-band frequencies is observed.
{"title":"Design and Simulation of Microstrip Antenna Array Operating at S-band for Wireless Communication System","authors":"Sattar Othman Hasan, Saman Khabbat Ezzulddin, Rashad Hassan Mahmud, Mowfaq Jalil Ahmed","doi":"10.32985/ijeces.14.5.1","DOIUrl":"https://doi.org/10.32985/ijeces.14.5.1","url":null,"abstract":"In this article, different design configurations of rectangular microstrip patch antenna (RMSA) array operating at S-band frequency are presented. The substrate material utilized in the designs is Rogers-RT-5800 with dielectric permittivity (Ԑr= 2.2), thickness of (h=1.6 mm), and loss tangent of (δ = 0.009). The performances of a single element, (1×2), (2×2) and (1×4) array elements operating at (3.6 GHz) are investigated using the CST and HFSS numerical techniques. The simulation results indicates that the antenna gain of (8.68, 10.35, 10.43 and 10.52) dB, VSWR (1.045, 1.325, 1.095 and 1.945), return loss (-34.91, -17.15, -27.42 and -12.26) dB, and bandwidth (85.00, 200.00, 215 and 106.4) MHz are achieved with the implementation of HFSS for advanced single element, (1×2), (2×2) and (1×4) array elements, respectively. Besides, the corresponding antenna parameter values provided by CST are, gain (7.36, 9.8, 9.87 and 10.30) dB, VSWR (1.011, 1.304, 1.305 and 1.579), return loss (-44.97, -17.58, -17.55 and -14.01) dB, and bandwidth (92.28, 204, 229.49 and 129.12) MHz, respectively. The results also reveals that the higher gain and wider bandwidth are, respectively, achieved with (1×4) and (2×2) array configuration arrangement and with both simulation techniques. Additionally, a good agreement and an advancement between the obtained results with the ones previously studied for the same array types operating at S-band frequencies are also observed.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44654097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature selection is an essential preprocessing step for removing redundant or irrelevant features from multidimensional data to improve predictive performance. Medical clinical datasets are increasingly large and multidimensional, and not every feature helps in the necessary predictions, so feature selection techniques are used to determine a relevant feature set that can improve the performance of a learning algorithm. This study presents a performance analysis of a new filter and wrapper sequence involving the intersection of two filter methods, Mutual Information and Chi-Square, followed by one of the wrapper methods, Sequential Forward Selection or Sequential Backward Selection, to obtain a more informative feature set for improved prediction of the survivability of breast cancer patients from the clinical breast cancer dataset SEER. The improvement in performance due to this filter and wrapper sequence, in terms of Accuracy, False Positive Rate, False Negative Rate and Area under the Receiver Operating Characteristic curve, is tested using the machine learning algorithms Logistic Regression, K-Nearest Neighbour, Decision Tree, Random Forest, Support Vector Machine and Multilayer Perceptron. The performance analysis supports the Sequential Backward Selection variant of the new filter and wrapper sequence over Sequential Forward Selection for the SEER dataset.
{"title":"Performance Analysis of a new Filter and Wrapper Sequence for the Survivability Prediction of Breast Cancer Patients","authors":"E. J. Sweetlin, S. Saudia","doi":"10.32985/ijeces.14.5.6","DOIUrl":"https://doi.org/10.32985/ijeces.14.5.6","url":null,"abstract":"Feature selection is an essential preprocessing step for removing redundant or irrelevant features from multidimensional data to improve predictive performance. Currently, medical clinical datasets are increasingly large and multidimensional and not every feature helps in the necessary predictions. So, feature selection techniques are used to determine relevant feature set that can improve the performance of a learning algorithm. This study presents a performance analysis of a new filter and wrapper sequence involving the intersection of filter methods, Mutual Information and Chi-Square followed by one of the wrapper methods: Sequential Forward Selection and Sequential Backward Selection to obtain a more informative feature set for improved prediction of the survivability of breast cancer patients from the clinical breast cancer dataset, SEER. The improvement in performance due to this filter and wrapper sequence in terms of Accuracy, False Positive Rate, False Negative Rate and Area under the Receiver Operating Characteristics curve is tested using the Machine learning algorithms: Logistic Regression, K-Nearest Neighbour, Decision Tree, Random Forest, Support Vector Machine and Multilayer Perceptron. The performance analysis supports the Sequential Backward Selection of the new filter and wrapper sequence over Sequential Forward Selection for the SEER dataset.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48472811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, skin cancer is one of the most important health problems faced by the world, due especially to the rapid development of skin cells and excessive exposure to UV rays. Early detection using advanced automated systems based on AI algorithms therefore plays a major role in effectively identifying and detecting the disease, reducing patients' health and financial burdens, and stopping its spread in the skin. In this context, several early skin cancer detection approaches and models have been presented over the last few decades to improve the rate of skin cancer detection using dermoscopic images. This work proposed a model that can help dermatologists detect skin cancer in just a few seconds. The model combined the merits of two major artificial intelligence approaches, Deep Learning and Reinforcement Learning, following their great success in image classification and recognition, especially in the medical sector. The research included four main steps. First, pre-processing techniques were applied to improve the accuracy, quality, and consistency of the dataset; the input dermoscopic images were obtained from the HAM10000 database. Then, the watershed algorithm was used for the segmentation step, which extracts the affected area. After that, a deep convolutional neural network (CNN) was utilized to classify the skin cancer into seven types: actinic keratosis, basal cell carcinoma, benign keratosis, dermatofibroma, melanocytic nevi, melanoma, and vascular skin lesions. Finally, for the reinforcement learning part, the Deep Q-Learning algorithm was utilized to train and retrain the model until the best result was found. The accuracy metric was utilized to evaluate the efficacy and performance of the proposed method, which achieved a high accuracy of 80%. Furthermore, the experimental results demonstrate how reinforcement learning can be effectively combined with deep learning for skin cancer classification tasks.
{"title":"A New Approach using Deep Learning and Reinforcement Learning in HealthCare","authors":"Dahdouh Yousra, Anouar Boudhir Abdelhakim, Ben Ahmed Mohamed","doi":"10.32985/ijeces.14.5.7","DOIUrl":"https://doi.org/10.32985/ijeces.14.5.7","url":null,"abstract":"Nowadays, skin cancer is one of the most important problems faced by the world, due especially to the rapid development of skin cells and excessive exposure to UV rays. Therefore, early detection at an early stage employing advanced automated systems based on AI algorithms plays a major job in order to effectively identifying and detecting the disease, reducing patient health and financial burdens, and stopping its spread in the skin. In this context, several early skin cancer detection approaches and models have been presented throughout the last few decades to improve the rate of skin cancer detection using dermoscopic images. This work proposed a model that can help dermatologists to know and detect skin cancer in just a few seconds. This model combined the merits of two major artificial intelligence algorithms: Deep Learning and Reinforcement Learning following the great success we achieved in the classification and recognition of images and especially in the medical sector. This research included four main steps. Firstly, the pre-processing techniques were applied to improve the accuracy, quality, and consistency of a dataset. The input dermoscopic images were obtained from the HAM10000 database. Then, the watershed algorithm was used for the segmentation process performed to extract the affected area. After that, the deep convolutional neural network (CNN) was utilized to classify the skin cancer into seven types: actinic keratosis, basal cell carcinoma, benign keratosis, dermatofibroma melanocytic nevi, melanoma vascular skin lesions. Finally, in regards to the reinforcement learning part, the Deep Q_Learning algorithm was utilized to train and retrain our model until we found the best result. The accuracy metric was utilized to evaluate the efficacy and performance of the proposed method, which achieved a high accuracy of 80%. Furthermore, the experimental results demonstrate how reinforcement learning can be effectively combined with deep learning for skin cancer classification tasks.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49241175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian statistics is incorporated into a neural network to create a Bayesian neural network (BNN) that adds posterior inference aimed at preventing overfitting. BNNs are frequently used in medical image segmentation because they provide a stochastic view of segmentation approaches, producing a posterior probability under conventional constraints and allowing uncertainty over the posterior distributions to be depicted. However, the practical efficacy of BNNs is constrained by the difficulty of selecting expressive discretizations and adopting suitable posterior distributions in a high-dimensional domain. A functional discretization BNN using Gaussian processes (GPs) for medical image segmentation is proposed in this paper. Here, discretization inference is carried out in the functional domain by considering the prior and dynamic posterior distributions to be GPs. An upsampling operator that utilizes content-based feature extraction is also proposed; this is an adaptive method for extracting features after feature mapping, used in conjunction with the functional evidence lower bound and weights. The result is a loss-aware segmentation network that achieves an F1-score of 91.54%, accuracy of 90.24%, specificity of 88.54%, and precision of 80.24%.
{"title":"Segmentation of Medical Images with Adaptable Multifunctional Discretization Bayesian Neural Networks and Gaussian Operation","authors":"G. Ramalingam, Selvakumaran Selvaraj, Visumathi James, Senthil Kumar Saravanaperumal, Buvaneswari Mohanram","doi":"10.32985/ijeces.14.4.2","DOIUrl":"https://doi.org/10.32985/ijeces.14.4.2","url":null,"abstract":"Bayesian statistics is incorporated into a neural network to create a Bayesian neural network (BNN) that adds posterior inference aims at preventing overfitting. BNNs are frequently used in medical image segmentation because they provide a stochastic viewpoint of segmentation approaches by producing a posterior probability with conventional limitations and allowing the depiction of uncertainty over following distributions. However, the actual efficacy of BNNs is constrained by the difficulty in selecting expressive discretization and accepting suitable following disseminations in a higher-order domain. Functional discretization BNN using Gaussian processes (GPs) that analyze medical image segmentation is proposed in this paper. Here, a discretization inference has been assumed in the functional domain by considering the former and dynamic consequent distributions to be GPs. An upsampling operator that utilizes a content-based feature extraction has been proposed. This is an adaptive method for extracting features after feature mapping is used in conjunction with the functional evidence lower bound and weights. This results in a loss-aware segmentation network that achieves an F1-score of 91.54%, accuracy of 90.24%, specificity of 88.54%, and precision of 80.24%.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43393806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software-defined networking (SDN) provides increased flexibility in network management through distributed SDN control and has been a great breakthrough in network innovation. Switch migration is extensively used for workload balancing among distributed controllers. The time-sharing switch migration (TSSM) scheme proposes a strategy in which more than one controller is allowed to share the workload of a switch via time sharing during overloaded conditions, resulting in mitigation of the ping-pong controller problem, a reduced number of overload occurrences, and better controller efficiency. However, it incurs higher migration costs and higher controller resource consumption during the TSSM operation period because it requires more than one controller. We have therefore proposed a strategy that optimizes the controller selection during the TSSM period based on flow characteristics through a greedy set coverage algorithm. The improved TSSM scheme provides reduced migration costs and lower controller resource consumption while retaining the TSSM benefits. To demonstrate its feasibility, the proposed scheme is implemented on an open network operating system. The experimental results show that the proposed improved TSSM scheme reduces the migration cost and lowers the controller resource consumption by about 36% and 34%, respectively, compared with the conventional TSSM scheme.
{"title":"An Efficient Switch Migration Scheme for Load Balancing in Software Defined Networking","authors":"Thangaraj Ethilu, Abirami Sathappan, P. Rodrigues","doi":"10.32985/ijeces.14.4.8","DOIUrl":"https://doi.org/10.32985/ijeces.14.4.8","url":null,"abstract":"Software-defined networking (SDN) provides increased flexibility to network management through distributed SDN control, and it has been a great breakthrough in network innovation. Switch migration is extensively used for workload balancing among distributed controllers. The time-sharing switch migration (TSSM) scheme proposes a strategy in which more than one controller is allowed to share the workload of a switch via time sharing during overloaded conditions, resulting in the mitigation of ping-pong controller difficulty, a reduced number of overload occurrences, and better controller efficiency. However, it has increased migration costs and higher controller resource consumption during the TSSM operation period because it requires more than one controller to perform. Therefore, we have proposed a strategy that optimizes the controller selection during the TSSM period based on flow characteristics through a greedy set coverage algorithm. The improved TSSM scheme provides reduced migration costs and lower controller resource consumption, as well as TSSM benefits. For its feasibility, the implementation of the proposed scheme is accomplished through an open network operating system. The experimental results show that the proposed improved TSSM scheme reduces the migration cost and lowers the controller resource consumption by about 36% and 34%, respectively, as compared with the conventional TSSM scheme.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46733834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}