An efficient synthetic minority oversampling technique-based ensemble learning model to detect COVID-19 severity
Smriti Mishra, Ranjan Kumar, S. K. Tiwari, Priya Ranjan
Bulletin of Electrical Engineering and Informatics, 2024-06-01. DOI: 10.11591/eei.v13i3.6774
The COVID-19 pandemic has highlighted the importance of accurately predicting disease severity to ensure timely intervention and effective allocation of healthcare resources, which can ultimately improve patient outcomes. This study aims to develop an efficient machine learning (ML) model based on patient demographic and clinical data. It applies advanced feature engineering techniques to reduce the dimensionality of the dataset and addresses the highly imbalanced class distribution with the synthetic minority oversampling technique (SMOTE). The study employs several ensemble learning models, including XGBoost, random forest, AdaBoost, a voting ensemble, an enhanced weighted voting ensemble, and stack-based ensembles with support vector machine (SVM) and Gaussian naïve Bayes meta-learners. The results indicate that the proposed model outperformed the top-performing models reported in previous studies, achieving an accuracy of 0.978, sensitivity of 1.0, precision of 0.875, F1-score of 0.934, and receiver operating characteristic area under the curve (ROC-AUC) of 0.965. The study identified several features that correlated significantly with COVID-19 severity, including respiratory rate (breaths per minute), C-reactive protein, age, and total leukocyte count (TLC). The proposed approach presents a promising method for accurate COVID-19 severity prediction, which may prove valuable in assisting healthcare providers in making informed decisions about patient care.
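The SMOTE step this abstract relies on is easy to sketch. The following pure-numpy toy (the function name, parameters, and data are illustrative, not taken from the paper) shows the core SMOTE idea: pick a minority sample, find its nearest minority neighbors, and synthesize a new point by interpolating toward one of them.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by interpolating each
    chosen sample toward one of its k nearest minority neighbors
    (the core idea of SMOTE)."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from the chosen sample to every other minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        d[i] = np.inf                      # exclude the sample itself
        neighbors = np.argsort(d)[:k]
        j = rng.choice(neighbors)
        lam = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# toy minority class: 5 points in 2-D
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
X_new = smote_oversample(X_min, n_new=10, k=2, rng=0)
print(X_new.shape)  # (10, 2)
```

Because every synthetic point is a convex combination of two minority samples, the oversampled class stays inside the region the minority data already occupies.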
Description and analysis of Sigfox received signal strength indicator dataset by using statistical techniques
Román Lara-Cueva, Edwin Sebastián Yandún-Imbaquingo, Elvis D. Bustamante-Lucio
Bulletin of Electrical Engineering and Informatics, 2024-06-01. DOI: 10.11591/eei.v13i3.6862
Low power wide area network (LPWAN) technology has expanded and is essential in the development of applications for the internet of things (IoT). The Sigfox LPWAN is characterized by its long-range coverage and its low cost and power consumption. In this article, a set of 5,174 values containing 1,606 null received signal strength indicator (RSSI) entries is analyzed. The data were obtained with the SiPy module and MicroPython and provide a coverage map of several points, with a resolution of 200 meters, deployed in Quito, Ecuador. The type of distribution to which the set of network measurements conforms is evaluated, and an optimal 900 MHz propagation model for suburban environments is determined from the measurements obtained from the known base station. The missing RSSI values were predicted using the inverse normal distribution method on the original values, which were observed to conform to a logistic distribution. The data from the base station were processed by a data augmentation algorithm designed in MATLAB, which determined that the Stanford University Interim (SUI) model reduces the error in the trend of the curve, presenting no changes greater than 5 dB and achieving a precision of 97% with respect to the curve fit of the data.
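The abstract's imputation step (an inverse-distribution method applied to values that fit a logistic distribution) can be sketched with scipy. This is a hedged reconstruction, not the authors' MATLAB code: it fits a logistic distribution to the observed RSSI readings and fills the nulls by inverse-CDF (quantile) sampling from the fit. The toy data and parameters are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# toy RSSI-like readings in dBm, with some entries missing (NaN)
rssi = rng.logistic(loc=-110.0, scale=6.0, size=500)
rssi[rng.choice(rssi.size, size=150, replace=False)] = np.nan

observed = rssi[~np.isnan(rssi)]

# fit a logistic distribution to the observed values
loc, scale = stats.logistic.fit(observed)

# inverse-CDF sampling: push uniforms through the fitted quantile function
# (uniforms kept strictly inside (0, 1) so ppf stays finite)
u = rng.uniform(1e-6, 1.0 - 1e-6, size=int(np.isnan(rssi).sum()))
imputed = stats.logistic.ppf(u, loc=loc, scale=scale)

filled = rssi.copy()
filled[np.isnan(filled)] = imputed
print(np.isnan(filled).any())  # False: every null has been imputed
```

The imputed values follow the same marginal distribution as the observed signal, which is the property the paper's curve-fit comparison depends on.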
Enhanced convolutional neural network enabled optimized diagnostic model for COVID-19 detection
Aaron Meiyyappan Arul Raj, Sugumar Rajendran, Georgewilliam Sundaram Annie Grace Vimal
Bulletin of Electrical Engineering and Informatics, 2024-06-01. DOI: 10.11591/eei.v13i3.6393
Computed tomography (CT) constructs cross-sectional images of a particular region of the body from many X-ray readings obtained at various angles. There is general agreement in the medical community that chest CT is currently the most accurate approach for identifying COVID-19 disease, and chest CT has been demonstrated to have a higher sensitivity than reverse transcription polymerase chain reaction (RT-PCR) for its detection. This article presents a diagnostic model for COVID-19 detection based on gray-level co-occurrence matrix (GLCM) texture feature extraction and an optimized convolutional neural network (CNN). The model takes patients' CT scan images as input. First, the GLCM algorithm extracts texture features from the CT scan images; this feature extraction helps achieve higher classification accuracy. Classification is then performed with a CNN, which achieves higher accuracy than the k-nearest neighbors (KNN) algorithm and the multi-layer perceptron (MLP). The GLCM-based CNN attains an accuracy of 99%, an F1-score of 99%, and a recall of 98%, outperforming the MLP and KNN algorithms for COVID-19 detection.
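The GLCM feature extraction named above is just co-occurrence counting. A minimal numpy sketch for a single horizontal offset, with two of the standard Haralick-style texture features (contrast and homogeneity), follows; a full implementation would consider several offsets and angles, as library routines such as skimage's `graycomatrix` do.

```python
import numpy as np

def glcm(img, levels):
    """Gray-level co-occurrence counts for the horizontal offset (0, 1),
    normalized to a joint probability matrix P[i, j]."""
    M = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        M[a, b] += 1
    return M / M.sum()

def contrast(P):
    i, j = np.indices(P.shape)
    return np.sum(P * (i - j) ** 2)          # weights distant gray pairs

def homogeneity(P):
    i, j = np.indices(P.shape)
    return np.sum(P / (1.0 + np.abs(i - j)))  # weights near-diagonal mass

# toy 4-level "image"
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
print(round(float(contrast(P)), 3), round(float(homogeneity(P)), 3))
```

Each such scalar becomes one entry of the texture feature vector fed to the classifier.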
Soil moisture estimation using ground scatterometer and Sentinel-1 data
Geeta T. Desai, Abhay N. Gaikwad
Bulletin of Electrical Engineering and Informatics, 2024-06-01. DOI: 10.11591/eei.v13i3.6433
Soil moisture (SM) is a crucial criterion for agronomics and the management of water resources, particularly in areas where socio-economic status and a significant source of income depend upon agriculture and related sectors. This paper estimates SM over a vegetated area using a generalized regression neural network (GRNN) and a ground scatterometer, and compares the results with SM retrieved from Sentinel-1 data. Random forest regression (RFR) and support vector regression (SVR) models are also used for SM estimation. Correlation analysis concluded that L-band HV-polarization at a 30° incidence angle showed the highest correlation with the measured field parameters. The study investigated backscattering coefficients, the VV/VH polarization ratio, and the polarization phase difference over wheat's entire growth phase to estimate SM. The results indicate that the GRNN with backscattering coefficients and the polarization ratio provided the highest accuracy compared to RFR and SVR, with a root mean square error of 0.093 over the Yavatmal District, Maharashtra, India.
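The GRNN at the core of this paper is, in its standard form, Gaussian-kernel-weighted regression (the Nadaraya-Watson estimator). A minimal numpy sketch follows; the smoothing parameter sigma and the toy backscatter-to-moisture data are illustrative, not the paper's.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """Generalized regression neural network: each prediction is a
    Gaussian-kernel-weighted average of the training targets."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))      # pattern-layer weights
        preds.append(np.dot(w, y_train) / w.sum())
    return np.array(preds)

# toy mapping from one backscatter-like feature to soil moisture
X = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
y = 0.25 + 0.1 * X.ravel()                   # synthetic "soil moisture"
y_hat = grnn_predict(X, y, X, sigma=0.05)
rmse = float(np.sqrt(np.mean((y_hat - y) ** 2)))
print(rmse)  # small on this smooth toy target
```

Unlike RFR or SVR, the GRNN has no iterative training: sigma is its only free parameter, which is one reason it is popular for small geophysical datasets.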
A convolution neural network integrating climate variables and spatial-temporal properties to predict influenza trends
Jaroonsak Watmaha, Suwatchai Kamonsantiroj, Luepol Pipanmaekaporn
Bulletin of Electrical Engineering and Informatics, 2024-06-01. DOI: 10.11591/eei.v13i3.6619
The spread of influenza is contingent upon a multitude of outbreak-related factors, including viral mutation, climate conditions, acquisition of immunity, crowded environments, vaccine efficacy, social gatherings, and the health and age profiles of individuals in contact with infected individuals. An epidemic in one region is also impacted by the spatial transmission risk from adjacent regions, yet only a few influenza epidemic models highlight the spatial correlations between influenza patients in geographically adjacent regions. The proposed model represents climatic, immunization, and spatial correlations with a convolution neural network (CNN) for influenza epidemic forecasting. This study integrates three determinants for predicting influenza outbreaks: multivariate climate data, spatial data on influenza vaccination, and spatial-temporal data on historical influenza patients. The performance of three models, the CNN, a recurrent neural network (RNN), and a long short-term memory (LSTM) network, was compared using the root mean squared error (RMSE) metric. The findings revealed that the CNN model, representing human interaction at intervals of 12, 16, 20, 24, and 28 weeks, was the most effective, with the lowest RMSE of 0.00376 at a learning rate of 0.0001.
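The building block behind the windowed intervals described above is a 1-D convolution over the weekly series. A self-contained numpy sketch (the toy case counts and the uniform 12-week kernel are invented for illustration; a trained CNN would learn its kernel weights):

```python
import numpy as np

def conv1d_valid(series, kernel):
    """'Valid' 1-D convolution: each output is a weighted sum of one
    sliding window of past weeks -- the basic CNN building block."""
    n, k = len(series), len(kernel)
    return np.array([np.dot(series[i:i + k], kernel)
                     for i in range(n - k + 1)])

# toy weekly influenza case counts over one year
weeks = np.arange(52)
cases = 100 + 50 * np.sin(2 * np.pi * weeks / 52)

# a 12-week window, matching the shortest interval used in the paper
kernel = np.ones(12) / 12.0
smoothed = conv1d_valid(cases, kernel)
print(smoothed.shape)  # (41,): 52 - 12 + 1 valid positions
```

Stacking several such windows (12, 16, 20, 24, 28 weeks) gives the model views of the series at multiple temporal scales.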
Robust optimal control for uncertain wheeled mobile robot based on reinforcement learning: ADP approach
Hoa Van Doan, Nga Thi-Thuy Vu
Bulletin of Electrical Engineering and Informatics, 2024-06-01. DOI: 10.11591/eei.v13i3.7054
This paper presents a robust optimal control approach for the wheeled mobile robot system that considers the effects of external disturbances, uncertainties, and wheel slipping. The proposed method combines an adaptive dynamic programming (ADP) technique with a disturbance observer. First, the system's state-space model is formulated from the kinematic and dynamic models. The ADP method is then employed to establish an online adaptive optimal controller that relies on a single neural network for function approximation. The disturbance observer, together with a compensation controller, alleviates the effects of disturbances. The Lyapunov theorem establishes the stability of the complete closed-loop system and the convergence of the neural network weights. Simulations under disturbances and changes of the desired trajectory show the effectiveness of the proposed approach.
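The abstract does not state the equations, but the standard single-critic continuous-time ADP formulation it appears to follow is worth recording; all symbols below are the conventional ones for this class of method, not taken from the paper.

```latex
% cost to be minimized along trajectories of \dot{x} = f(x) + g(x)u
J(x_0) = \int_{0}^{\infty} \left( x^{\top} Q x + u^{\top} R u \right) \, dt
% critic approximation with a single neural network (weights \hat{W})
V(x) \approx \hat{W}^{\top} \phi(x)
% approximate optimal control recovered from the critic
u^{*}(x) = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla \phi(x)^{\top} \hat{W}
```

The "single neural network" claim in the abstract matches this structure: only the critic is approximated, and the actor is computed in closed form from the critic's gradient.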
MyPharmaceutical: an interactive proof of concept
Khor Ying Jie, Z. Zaaba, Mohd Adib Omar
Bulletin of Electrical Engineering and Informatics, 2024-06-01. DOI: 10.11591/eei.v13i3.5896
With the rise of health awareness, pharmaceutical and cosmetic products should be verified to protect consumers from health risks. MyPharmaceutical is a proof of concept comprising a mobile application for users to carry out product verification and reporting and a web application for administrative purposes. The data on registered pharmaceutical and cosmetic products were extracted from the national pharmaceutical regulatory agency (NPRA) website. The mobile application provides functionality for searching registered products, bookmarking products, reporting products, and tracking report status, and implements a barcode scanner for ease of product verification. A named entity recognition algorithm built with the NLP.js library provides an improved product search feature in which products can be searched with multiple criteria in a single input. The web application supports the mobile application: NPRA data administrators and officers can manage reported products, publish announcements, verify product data, and use an analytics dashboard. The proposed system is expected to ease product verification and reporting, assisting the public in choosing safe registered products and giving the NPRA a platform to manage data and deliver information to users.
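The multi-criteria single-input search can be illustrated with a toy parser. The real system uses a trained named entity recognition model via the JavaScript NLP.js library; the Python regex sketch below (all patterns, field names, and the example query are hypothetical) only demonstrates the idea of pulling several criteria out of one free-text query.

```python
import re

# hypothetical criteria patterns; a real NER model learns these instead
PATTERNS = {
    "dosage": re.compile(r"\b(\d+\s?(?:mg|ml|g))\b", re.I),
    "form": re.compile(r"\b(tablet|capsule|syrup|cream)\b", re.I),
}

def parse_query(query):
    """Split one free-text query into structured search criteria."""
    entities = {}
    rest = query
    for name, pat in PATTERNS.items():
        m = pat.search(rest)
        if m:
            entities[name] = m.group(1).lower()
            rest = rest.replace(m.group(1), "")   # consume the match
    entities["name"] = " ".join(rest.split())     # leftover = product name
    return entities

print(parse_query("paracetamol 500mg tablet"))
# {'dosage': '500mg', 'form': 'tablet', 'name': 'paracetamol'}
```

Each extracted field can then be matched against the corresponding column of the registered-product data.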
Development of the fuzzy grid partition methods in generating fuzzy rules for the classification of data set
Murni Marbun, O. S. Sitompul, E. Nababan, Poltak Sihombing
Bulletin of Electrical Engineering and Informatics, 2024-06-01. DOI: 10.11591/eei.v13i3.5378
The main weakness of large, complex fuzzy rule systems is the difficulty of interpreting the data in terms of classification, and classification interpretation can be degraded when rule reduction removes important rules. Experiments using the fuzzy grid partition (FGP) approach on high-dimensional data show that the number of fuzzy rules generated still grows exponentially with the number of attributes. The solution proposed here is a hybrid method that combines the advantages of the rough set method and the FGP method, called the fuzzy grid partition rough set (FGPRS) method. On the Iris data, the rough set approach reduces the number of attributes and objects so that redundant values are minimized, and the fuzzy rules produced by the FGP method become more concise. The number of fuzzy rules produced by the FGPRS method at K=2 is 50% of the original; at K=K+1 it is reduced by 66.7%, and at K=2K by 75%. In the classification test, the FGPRS method reached a classification accuracy of 83.33%, and all data could be classified.
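The exponential blow-up motivating the hybrid method is easy to see: a grid partition with K fuzzy sets per attribute has one candidate rule per grid cell, i.e. K^n rules for n attributes. A one-line sketch (the function name is illustrative):

```python
def fgp_rule_count(k, n_attrs):
    """Candidate rule count for a fuzzy grid partition with k fuzzy
    sets per attribute: one rule per cell of the k**n_attrs grid."""
    return k ** n_attrs

print(fgp_rule_count(2, 4))   # 16 rules for 4 attributes at K=2
print(fgp_rule_count(5, 10))  # 9765625 -- the exponential growth
```

Reducing the attribute count with rough sets before partitioning attacks the exponent directly, which is why the hybrid shrinks the rule base so sharply.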
Combining dual attention mechanism and efficient feature aggregation for road and vehicle segmentation from UAV imagery
Trung Dung Nguyen, Trung Kien Pham, Chi Kien Ha, Long Ho Le, Thanh Quyen Ngo, Hoanh Nguyen
Bulletin of Electrical Engineering and Informatics, 2024-06-01. DOI: 10.11591/eei.v13i3.6742
Unmanned aerial vehicles (UAVs) have gained significant popularity in recent years due to their ability to capture high-resolution aerial imagery for applications including traffic monitoring, urban planning, and disaster management, in which accurate road and vehicle segmentation plays a crucial role. In this paper, we propose a novel approach that combines dual attention mechanisms and efficient multi-layer feature aggregation to enhance road and vehicle segmentation from UAV imagery. Our approach integrates a spatial attention mechanism and a channel-wise attention mechanism, enabling the model to focus selectively on the features relevant to the segmentation task. In conjunction with these attention mechanisms, we introduce an efficient multi-layer feature aggregation method that synthesizes and integrates multi-scale features at different levels of the network, producing a more robust and informative feature representation. Evaluated on the UAVid semantic segmentation dataset, the proposed method surpasses well-known approaches such as U-Net, DeepLabv3+, and SegNet in segmentation accuracy.
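The two gating ideas named in this abstract, channel-wise and spatial attention, can be sketched without a deep learning framework. The pure-numpy toy below is not the authors' modules; it shows the generic pattern (squeeze-and-excite-style channel gates, then a per-location spatial gate) with random stand-in weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, W):
    """Gate each channel by a weight derived from its global average
    response; feat has shape (C, H, W_)."""
    pooled = feat.mean(axis=(1, 2))        # (C,) global average pool
    gate = sigmoid(W @ pooled)             # (C,) per-channel gates
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Gate each spatial location by the channel-wise mean response."""
    gate = sigmoid(feat.mean(axis=0))      # (H, W_) per-location gates
    return feat * gate[None, :, :]

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 16, 16))        # toy feature map, 8 channels
W = rng.normal(size=(8, 8))                # mixing weights; learned in practice
out = spatial_attention(channel_attention(feat, W))
print(out.shape)  # (8, 16, 16): attention reweights, it never reshapes
```

In a real network both gates are produced by small learned sub-networks, and the reweighted maps from several scales are then aggregated, which is the paper's second contribution.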
Cross-project software defect prediction through multiple learning
Yahaya Zakariyau Bala, Pathiah Abdul Samat, Khaironi Yatim Sharif, N. Manshor
Bulletin of Electrical Engineering and Informatics, 2024-06-01. DOI: 10.11591/eei.v13i3.5258
Cross-project defect prediction predicts defects in one software project using the historical records of another. Because of distribution differences between projects and the weak classifiers used to build the prediction models, the method often performs poorly. Its performance may improve if distribution differences are reduced and an appropriate individual classifier is chosen, although individual classifiers remain limited by their own weaknesses. To boost the accuracy of cross-project defect prediction, this study therefore proposes a strategy that uses multiple classifiers and selects attributes that are similar across projects. The efficacy of the proposed method was tested in experiments on the Relink and AEEEM datasets, where it produced superior outcomes. To further validate the method, we employed the Wilcoxon rank-sum test at the 95% significance level, which found the approach to perform significantly better than the baseline methods.
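The significance check used above is available in scipy. The sketch below runs a Wilcoxon rank-sum test on hypothetical score samples (the means, spreads, and sample sizes are invented; the paper's actual per-dataset results are not reproduced here).

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
# hypothetical performance scores: proposed method vs. a baseline
proposed = rng.normal(loc=0.80, scale=0.03, size=30)
baseline = rng.normal(loc=0.70, scale=0.03, size=30)

stat, p = ranksums(proposed, baseline)
print(p < 0.05)  # True here: reject the null at the 95% level
```

Being rank-based, the test makes no normality assumption about the score distributions, which is why it is a common choice for comparing defect-prediction models across datasets.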