Title: Dimensionality Reduction with Truncated Singular Value Decomposition and K-Nearest Neighbors Regression for Indoor Localization
Authors: Hang Duong Thi, Kha Hoang Manh, Vu Trinh Anh, Trang Pham Thi Quynh, Tuyen Nguyen Viet
Journal: International Journal of Advanced Computer Science and Applications
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0141034
Abstract: Indoor localization presents formidable challenges across diverse sectors, encompassing indoor navigation and asset tracking. In this study, we introduce an inventive indoor localization methodology that combines Truncated Singular Value Decomposition (Truncated SVD) for dimensionality reduction with the K-Nearest Neighbors Regressor (KNN Regression) for precise position prediction. The central objective of this proposed technique is to mitigate the complexity of high-dimensional input data while preserving critical information essential for achieving accurate localization outcomes. To validate the effectiveness of our approach, we conducted an extensive empirical evaluation employing a publicly accessible dataset. This dataset covers a wide spectrum of indoor environments, facilitating a comprehensive assessment. The performance evaluation metrics adopted encompass the Root Mean Squared Error (RMSE) and the Euclidean distance error (EDE), both widely embraced in the field of localization. Importantly, the simulated results demonstrated promising performance, yielding an RMSE of 1.96 meters and an average EDE of 2.23 meters. These results surpass the achievements of prevailing state-of-the-art techniques, which typically attain localization accuracies ranging from 2.5 meters to 2.7 meters on the same dataset. The enhanced accuracy in localization can be attributed to the synergy between Truncated SVD's dimensionality reduction and the proficiency of KNN Regression in capturing intricate spatial relationships among data points. Our proposed approach highlights its potential to deliver heightened precision in indoor localization outcomes, with immediate relevance to real-time scenarios. Future research endeavors involving comprehensive comparative analyses with advanced techniques hold promise in propelling the field of accurate indoor localization solutions forward.
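To make the second stage of this pipeline concrete, here is a minimal sketch of KNN position regression over a toy Wi-Fi fingerprint radio map. This is illustrative only, not the authors' code: the radio-map values and positions below are invented, and a real implementation would pair something like scikit-learn's TruncatedSVD with KNeighborsRegressor on the actual dataset.

```python
import math

def knn_locate(fingerprint, database, k=3):
    """Estimate a 2-D position as the mean of the positions of the k
    reference fingerprints closest (in Euclidean distance) to the query."""
    ranked = sorted(database, key=lambda rec: math.dist(fingerprint, rec[0]))
    nearest = ranked[:k]
    x = sum(pos[0] for _, pos in nearest) / k
    y = sum(pos[1] for _, pos in nearest) / k
    return (x, y)

# Tiny toy radio map: (RSSI vector from 3 access points, known position).
radio_map = [
    ([-40, -70, -80], (0.0, 0.0)),
    ([-42, -68, -79], (1.0, 0.0)),
    ([-70, -45, -75], (5.0, 5.0)),
    ([-72, -44, -74], (5.0, 6.0)),
]
print(knn_locate([-41, -69, -80], radio_map, k=2))  # (0.5, 0.0)
```

The Euclidean distance error the paper reports would then simply be `math.dist(estimate, ground_truth)` averaged over the test points.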
Title: Cyberbullying Detection using Machine Learning and Deep Learning
Authors: Aljwharah Alabdulwahab, Mohd Anul Haq, Mohammed Alshehri
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0141045
Abstract: Driven by the human desire to learn new things and follow news from around the world, social networks were invented to serve human needs and have spread rapidly, but they have both a bright and a dark side. The dark side is that strangers or anonymous users harass some users with obscene language, causing psychological harm; this motivates the search for ways to detect cyberbullying and curb this alarming phenomenon. In this context, the present investigation employs Natural Language Processing (NLP) to detect cyberbullying. Machine learning (ML) methods are applied to specific features or criteria for detecting cyberbullying on social media. The collected features were analyzed using K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Naive Bayes (NB), Decision Tree (DT), and Random Forest (RF) methods. Test results on the proposed framework in a multi-class setting are reported using kappa, classification accuracy, and F-measure. These outcomes show that the suggested model is a valuable method for predicting cyberbullying behavior, its severity, and its impact on online social networks. Finally, we compared the proposed features against baseline features across the machine learning techniques, which demonstrates the importance and effectiveness of the proposed features for detecting cyberbullying. We evaluated the models and obtained accuracies of 0.90 for KNN, 0.92 for SVM, and 0.96 for deep learning.
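As an illustration of one of the classifier families the study compares, the following is a minimal multinomial Naive Bayes text classifier with Laplace smoothing, trained on a tiny invented toy corpus. It is a sketch of the general technique only; the paper's features, dataset, and tuning are not reproduced here.

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: list of (text, label). Returns log-priors and per-class
    word counts for a multinomial Naive Bayes classifier."""
    class_docs = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        class_docs[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    total = sum(class_docs.values())
    priors = {c: math.log(n / total) for c, n in class_docs.items()}
    return priors, word_counts, vocab

def classify_nb(text, priors, word_counts, vocab):
    """Pick the class with the highest log-posterior, using add-one
    (Laplace) smoothing for unseen words."""
    scores = {}
    for c in priors:
        denom = sum(word_counts[c].values()) + len(vocab)
        score = priors[c]
        for w in text.lower().split():
            score += math.log((word_counts[c][w] + 1) / denom)
        scores[c] = score
    return max(scores, key=scores.get)

train = [
    ("you are stupid and ugly", "bully"),
    ("nobody likes you loser", "bully"),
    ("have a great day friend", "ok"),
    ("nice photo thanks for sharing", "ok"),
]
priors, counts, vocab = train_nb(train)
print(classify_nb("you are such a loser", priors, counts, vocab))  # bully
```

A production pipeline would replace the raw word counts with the paper's engineered features and evaluate with kappa and F-measure as the abstract describes.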
Title: Optimizing Hyperparameters for Improved Melanoma Classification using Metaheuristic Algorithm
Authors: Shamsuddeen Adamu, Hitham Alhussian, Norshakirah Aziz, Said Jadid Abdulkadir, Ayed Alwadin, Abdullahi Abubakar Imam, Aliyu Garba, Yahaya Saidu
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0141057
Abstract: Melanoma, a prevalent and formidable skin cancer, necessitates early detection for improved survival rates. The rising incidence of melanoma poses significant challenges to healthcare systems worldwide. While deep neural networks offer the potential for precise melanoma classification, the optimization of hyperparameters remains a major obstacle. This paper introduces an approach that harnesses the Manta Rays Foraging Optimizer (MRFO) to empower melanoma classification. MRFO efficiently fine-tunes hyperparameters for a Convolutional Neural Network (CNN) using the ISIC 2019 dataset, which comprises 776 images (438 melanoma, 338 non-melanoma). The proposed cost-effective DenseNet121 model surpasses other optimization methods in various metrics during training, testing, and validation. It achieves an accuracy of 99.26%, an AUC of 99.56%, an F1 score of 0.9091, a precision of 94.06%, and a recall of 87.96%. Comparative analysis with EfficientNetB1, EfficientNetB7, EfficientNetV2B0, NASNetLarge, ResNet50, VGG16, and VGG19 models demonstrates its superiority. These findings underscore the potential of the MRFO-based approach in achieving superior accuracy for melanoma classification. The proposed method has the potential to be a valuable tool for early detection and improved patient outcomes.
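MRFO's chain, cyclone, and somersault foraging moves are not reproduced here, but the overall shape of a population-based hyperparameter search can be sketched with a simplified drift-toward-best loop over a toy stand-in for validation loss. Everything below (the loss surface, bounds, and update rule) is an illustrative assumption, not the paper's setup.

```python
import math
import random

random.seed(0)

def toy_val_loss(lr, dropout):
    """Stand-in for 'train the CNN and return validation loss'; its
    minimum sits at lr = 1e-3, dropout = 0.3 (purely illustrative)."""
    return (math.log10(lr) + 3.0) ** 2 + (dropout - 0.3) ** 2

def fitness(agent):
    log_lr, dp = agent
    return toy_val_loss(10 ** log_lr, dp)

def population_search(n_agents=8, n_iters=40):
    # Agents are [log10(learning rate), dropout] pairs within bounds.
    agents = [[random.uniform(-5.0, -1.0), random.uniform(0.0, 0.6)]
              for _ in range(n_agents)]
    best = min(agents, key=fitness)[:]
    for _ in range(n_iters):
        for a in agents:
            # Drift each agent toward the current best plus noise: a crude
            # stand-in for MRFO's foraging moves.
            a[0] += 0.5 * (best[0] - a[0]) + random.gauss(0.0, 0.1)
            a[1] += 0.5 * (best[1] - a[1]) + random.gauss(0.0, 0.05)
            a[1] = min(max(a[1], 0.0), 0.6)
        cand = min(agents, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand[:]
    return 10 ** best[0], best[1]

lr, dropout = population_search()
print(f"found lr={lr:.2e}, dropout={dropout:.2f}")
```

In the real method, evaluating an agent means training and validating the DenseNet121 model with that agent's hyperparameters, which is why an efficient metaheuristic matters.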
Title: An Optimized Deep Learning Method for Video Summarization Based on the User Object of Interest
Authors: Hafiz Burhan Ul Haq, Watcharapan Suwansantisuk, Kosin Chamnongthai
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0141027
Abstract: Surveillance video now plays a vital role in maintaining security and protection thanks to the advancement of digital video technology. Businesses, both private and public, employ surveillance systems to monitor and track their daily operations. As a result, video generates a significant volume of data that must be processed further to satisfy security protocol requirements. Analyzing video requires considerable effort and time, as well as fast equipment. To work past these limitations, the concept of video summarization has emerged. In this study, a deep learning-based method for customized video summarization is presented, enabling users to produce a video summary in accordance with a User Object of Interest (UOoI) such as a car, airplane, person, or bicycle. Several experiments were conducted on two datasets, SumMe and a self-created dataset, to assess the efficiency of the proposed method. On SumMe and the self-created dataset, the overall accuracy is 98.7% and 97.5%, respectively, with summarization rates of 93.5% and 67.3%. Furthermore, a comparative study demonstrates that the proposed method is superior to existing methods in terms of video summarization accuracy and robustness. Additionally, a graphical user interface was created to assist the user in summarizing a video using the UOoI.
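The core selection step of object-of-interest summarization can be sketched as follows: given per-frame detector labels, keep only the frames that contain the UOoI and report the summarization rate. The labels below are invented, and a real system would obtain them from an object detector run on each frame; this is a sketch of the idea, not the paper's pipeline.

```python
def summarize_by_object(frame_labels, uooi):
    """Keep only the frames whose detected labels include the User Object
    of Interest; return kept frame indices and the summarization rate
    (percentage of frames removed)."""
    kept = [i for i, labels in enumerate(frame_labels) if uooi in labels]
    rate = 100.0 * (len(frame_labels) - len(kept)) / len(frame_labels)
    return kept, rate

# Toy per-frame detector output for a 5-frame clip.
frames = [{"person"}, {"car", "person"}, set(), {"car"}, {"tree"}]
kept, rate = summarize_by_object(frames, "car")
print(kept, rate)  # [1, 3] 60.0
```

The kept indices would then be stitched back into the summary video shown to the user.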
Title: Human Coach Technology Reactance Factors and their Influence on End-Users' Acceptance of e-Health Applications
Authors: Sarah Janböcke, Toshimi Ogawa, Johanna Langendorf, Koki Kobayashi, Ryan Browne, Rainer Wieching, Yasuyuki Taki
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0141001
Abstract: Project e-VITA is a joint European-Japanese research effort that examines various cutting-edge e-health applications for older adult care. These users do not necessarily feel technology-savvy or secure enough to open up to innovative home tech systems. Thus, it is essential to provide support that is both virtual and human, side by side. Human coaches provide this support, fulfilling the role of mediator between the technological system and the end-user. Reactance toward the system in the mediator role could lead to the system's failure with the end-user, and thus to the failure of the development. The effect of technology reactance during the integration of a technological system can be the decisive factor in its success or failure. We used partially standardized, problem-centered interviews to understand the human coaches' challenges. The sample comprised people who act as mediators between the user and the technological system in the test application at the study centers. The interviews focused on experienced or imagined hurdles in the communication process between user and mediator, as well as the later relationship dynamics among mediator, end-user, and technological system. The technological challenges described during the testing phase led the human coaches to responsibility diffusion and uncertainty within their role. Furthermore, they produced a feeling of not fulfilling role expectations, which in the long term could indicate a lack of self-efficacy among the human coaches. We describe possible solutions mentioned by the interviewees and deepen the understanding of the decisive factors for sustainable system integration of e-health applications.
Title: A Model for Pervasive Computing and Wearable Devices for Sustainable Healthcare Applications
Authors: Deshinta Arrova Dewi, Rajermani Thinakan, Malathy Batumalay, Tri Basuki Kurniawan
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0141056
Abstract: In systems supported by the Internet of Things, user demands are frequently handled effectively by pervasive computing. Pervasive computing describes a system that integrates several communication and distributed-network technologies, and it accommodates user needs well. Even so, it is quite difficult to be inventive within a pervasive computing system when it comes to information delivery, handling standards, and extending heterogeneous support to scattered clients. In this view, our paper utilizes a Dispersed and Elastic Computing Model (DECM) to enable proper and reliable communication for people using IoT-based wearable healthcare devices. Recurrent Reinforcement Learning (RRL) is used in the suggested model and the connected system to analyze resource allocation in response to requirements and other allocative factors. To provide effective data transmission over wearable medical devices, the system considers mobility management alongside resource allocation and distribution. The results show that, based on the determined resource demands, the pervasive computing system serves users with reduced latency and an increased communication rate for healthcare wearables, an important aspect of sustainable healthcare. We employ assessment metrics consisting of request failures, response time, managed and backlogged requests, bandwidth, and storage to capture the consistency of the proposed model.
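Two of the metrics named above, response time and backlogged requests, can be illustrated with a toy FIFO request-allocation simulation of a single capacity-limited node. This is a minimal stand-in for measuring such metrics, not the DECM/RRL model itself; the arrival pattern and capacity are invented.

```python
from collections import deque

def simulate(arrivals, capacity):
    """Toy single-node simulation: at each time step, arrivals[t] requests
    arrive and at most `capacity` are served (FIFO). Returns the number
    served, the final backlog, and the mean waiting time in steps."""
    queue = deque()
    served, waits = 0, []
    for t, n in enumerate(arrivals):
        queue.extend([t] * n)                  # enqueue arrival times
        for _ in range(min(capacity, len(queue))):
            waits.append(t - queue.popleft())  # steps spent waiting
            served += 1
    mean_wait = sum(waits) / len(waits) if waits else 0.0
    return served, len(queue), mean_wait

served, backlog, mean_wait = simulate([3, 5, 1, 0], capacity=2)
print(served, backlog, mean_wait)  # 8 1 0.875
```

A learning-based allocator would adjust the effective capacity per node in response to demand, which is the role the abstract assigns to RRL.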
Title: HHO-SMOTe: Efficient Sampling Rate for Synthetic Minority Oversampling Technique Based on Harris Hawk Optimization
Authors: Khaled SH. Raslan, Almohammady S. Alsharkawy, K. R. Raslan
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0141047
Abstract: Classifying imbalanced datasets presents a significant challenge in the field of machine learning, especially with big data, where instances are unevenly distributed among classes, leading to class imbalance issues that affect classifier performance. The Synthetic Minority Over-sampling Technique (SMOTE) is an effective oversampling method that addresses this by generating new instances for the under-represented minority class. However, SMOTE's efficiency relies on the sampling rate for minority class instances, making optimal sampling rates crucial for solving class imbalance. In this paper, we introduce HHO-SMOTe, a novel hybrid approach that combines the Harris Hawk Optimization (HHO) search algorithm with SMOTE to enhance classification accuracy by determining optimal sampling rates for each dataset. We conducted extensive experiments across diverse datasets to comprehensively evaluate our binary classification model. The results demonstrated our model's exceptional performance, with an AUC score exceeding 0.96, a high G-means score of 0.95 highlighting its robustness, and an outstanding F1-score consistently exceeding 0.99. These findings collectively establish our proposed approach as a formidable contender in the domain of binary classification models.
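SMOTE's core interpolation step, with the sampling rate exposed as the parameter that HHO-SMOTe tunes, can be sketched in a few lines on toy 2-D data. The HHO search itself is not shown; here the sampling rate is simply a fixed input, and all data values are invented.

```python
import math
import random

random.seed(1)

def smote(minority, sampling_rate, k=2):
    """Generate round(sampling_rate * len(minority)) synthetic points.
    Each new point lies on the segment between a random minority point
    and one of its k nearest minority neighbours (the core SMOTE step)."""
    synthetic = []
    n_new = round(sampling_rate * len(minority))
    for _ in range(n_new):
        x = random.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: math.dist(x, p))[:k]
        nb = random.choice(neighbours)
        gap = random.random()  # position along the segment
        synthetic.append(tuple(xi + gap * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (1.1, 1.3)]
new_points = smote(minority, sampling_rate=1.5)
print(len(new_points))  # 6 synthetic minority samples
```

In HHO-SMOTe, the metaheuristic would evaluate candidate sampling rates by the downstream classifier's performance and keep the best one per dataset.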
As the growth of immersive 3D animation, its application in ink element animation is constantly updating and advancing. However, the current immersive 3D ink element animation production also has the problem of lack of innovation and repeated development, so the research innovatively designs and develops the image stitching method for immersive 3D ink element animation production. The method is designed through stereo matching algorithm and scale-invariant feature transform algorithm, and the stereo matching algorithm is optimized with the weighted median filtering method based on the guide map. In addition, the study also designs the specific implementation of this method from different functional modules. The experimental results show that on four different datasets, the error percentages of the optimized stereo matching algorithm in non-occluded areas are 0.3885%, 0.4743%, 1.6848%, and 1.34%, respectively. The error percentages of all areas are 0.8316%, 0.8253%, 4.3235%, and 4.1760%, respectively. The research and design of image stitching methods can be applied in other fields and has good practical significance.
Chen Yang, Siti Salmi Jamali, Adzira Husain, Nianyou Zhu, "Image Stitching Method and Implementation for Immersive 3D Ink Element Animation Production," International Journal of Advanced Computer Science and Applications. DOI: 10.14569/ijacsa.2023.01410120
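The guide-map-based weighted median filtering used to optimize the stereo matching above can be illustrated with a minimal sketch: each disparity pixel is replaced by the weighted median of its neighborhood, with weights derived from similarity in the guide image. This is an illustrative NumPy implementation of the general idea, not the authors' code; the function name, `radius`, and `sigma` are assumptions.

```python
import numpy as np

def weighted_median_filter(disparity, guide, radius=1, sigma=10.0):
    """Replace each pixel with the weighted median of its neighborhood,
    weighting neighbors by their similarity in the guide image."""
    h, w = disparity.shape
    out = np.zeros_like(disparity, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            vals = disparity[y0:y1, x0:x1].ravel().astype(float)
            # bilateral-style weights from guide-image intensity similarity
            diff = guide[y0:y1, x0:x1].astype(float) - float(guide[y, x])
            wts = np.exp(-(diff ** 2) / (2 * sigma ** 2)).ravel()
            # weighted median: first value whose cumulative weight
            # reaches half of the total weight
            order = np.argsort(vals)
            cum = np.cumsum(wts[order])
            idx = np.searchsorted(cum, cum[-1] / 2.0)
            out[y, x] = vals[order[idx]]
    return out
```

With a uniform guide image the weights are equal and the filter reduces to an ordinary median, which suppresses isolated disparity outliers while respecting edges present in the guide.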
In recent years, seismology has witnessed increasing integration of advanced computational techniques aimed at improving the precision and timeliness of earthquake prediction. The paper titled "Deep Convolutional Neural Network and Machine Learning Enabled Framework for Analysis and Prediction of Seismic Events" explores this intersection, combining Deep Convolutional Neural Networks (CNNs) with an array of machine learning algorithms. At the core of our investigation is the Deep CNN, valued for its capability to process spatial hierarchies and multi-dimensional seismic data. It is complemented by LightGBM, a gradient boosting framework that offers strong speed and performance, especially on large datasets. Conventional neural networks, noted for their adeptness in pattern recognition, provide a further robust means of capturing the intricacies of seismic data. The research also employs Random Forest and Support Vector Machines (SVM), both renowned for resilient performance in classification tasks. By combining these diverse methodologies, this research builds a multifaceted, synergistic framework: a tool designed not only to discern the details of seismic activity with heightened accuracy but also to predict forthcoming events with greater confidence than previously achievable. In an era of escalating seismic activity, our research offers a timely contribution toward communities better equipped to respond to the Earth's tremors.
Assem Turarbek, Maktagali Bektemesov, Aliya Ongarbayeva, Assel Orazbayeva, Aizhan Koishybekova, Yeldos Adetbekov, "Deep Convolutional Neural Network for Accurate Prediction of Seismic Events," International Journal of Advanced Computer Science and Applications. DOI: 10.14569/ijacsa.2023.0141064
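A common way to fuse heterogeneous models such as a CNN, LightGBM, Random Forest, and an SVM is soft voting: averaging their predicted class probabilities and taking the argmax. The sketch below shows this fusion step in plain NumPy; the function name, the model weights, and the example probability matrices are illustrative assumptions, not the paper's actual framework.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Fuse per-model class-probability matrices of shape
    (n_samples, n_classes) by a weighted average, then pick
    the argmax class for each sample."""
    probs = np.stack(prob_list)               # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()         # normalize model weights
    fused = np.tensordot(weights, probs, axes=1)  # (n_samples, n_classes)
    return fused.argmax(axis=1), fused

# hypothetical outputs of three models on two samples, two classes
cnn  = np.array([[0.9, 0.1], [0.4, 0.6]])
lgbm = np.array([[0.8, 0.2], [0.3, 0.7]])
svm  = np.array([[0.7, 0.3], [0.6, 0.4]])
labels, fused = soft_vote([cnn, lgbm, svm])
```

Unequal `weights` allow the fusion to favor stronger models, e.g. giving the CNN more influence than the SVM on spatial features.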
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0141032
Mariam El Ghazi, Noura Aknin
Human Activity Recognition (HAR) holds significant implications across diverse domains, including healthcare, sports analytics, and human-computer interaction. Deep learning models demonstrate great potential in HAR, but their performance is often hindered by imbalanced datasets. This study investigates the impact of class imbalance on deep learning models in HAR and conducts a comprehensive comparative analysis of sampling techniques to mitigate the issue. The experimentation involves the PAMAP2 dataset, comprising data collected from wearable sensors. The research includes four primary experiments. Initially, a performance baseline is established by training four deep-learning models on the imbalanced dataset. Subsequently, the Synthetic Minority Over-sampling Technique (SMOTE), random under-sampling, and a hybrid sampling approach are employed to rebalance the dataset. In each experiment, Bayesian optimization is used for hyperparameter tuning to optimize model performance. The findings underscore the importance of dataset balance, yielding substantial improvements across critical performance metrics such as accuracy, F1 score, precision, and recall. Notably, the hybrid technique combining SMOTE and random under-sampling emerges as the most effective method, surpassing the other approaches. This research contributes to advancing the field of HAR by highlighting the necessity of addressing class imbalance in deep learning models, and the results offer practical insights for developing HAR systems with improved accuracy and reliability in real-world applications. Future work will explore alternative public datasets, more complex deep learning models, and diverse sampling techniques to further elevate the capabilities of HAR systems.
Mariam El Ghazi, Noura Aknin, "A Comparison of Sampling Methods for Dealing with Imbalanced Wearable Sensor Data in Human Activity Recognition using Deep Learning," International Journal of Advanced Computer Science and Applications. DOI: 10.14569/ijacsa.2023.0141032
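The core of SMOTE, used above to rebalance the minority activity classes, is to synthesize new minority samples by interpolating between a minority point and one of its k nearest minority neighbors. A minimal NumPy sketch of that idea follows; it is an illustrative toy, not the study's pipeline (production work would typically use a library implementation such as imbalanced-learn's `SMOTE`), and the function name and parameters are assumptions.

```python
import numpy as np

def smote_oversample(X_min, n_synthetic, k=3, rng=None):
    """Minimal SMOTE: create synthetic minority samples by linear
    interpolation between a minority point and one of its k nearest
    minority-class neighbors."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    n = len(X_min)
    # pairwise Euclidean distances among minority samples
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude each point itself
    nn = np.argsort(d, axis=1)[:, :k]      # k nearest minority neighbors
    synth = np.empty((n_synthetic, X_min.shape[1]))
    for i in range(n_synthetic):
        a = rng.integers(n)                          # pick a minority sample
        b = nn[a, rng.integers(min(k, n - 1))]       # and one of its neighbors
        gap = rng.random()                           # interpolation factor in [0, 1)
        synth[i] = X_min[a] + gap * (X_min[b] - X_min[a])
    return synth
```

Because each synthetic point lies on a segment between two real minority samples, the new points stay inside the minority class's local region of feature space rather than being arbitrary noise.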