Deep Learning with OBH for Real-Time Rotation-Invariant Signs Detection
S. Akhter, Shah Jafor Sadeek Quaderi, Saleh Ud-Din Ahmed
Numerous studies have sought solutions for sign language recognition and classification. Deep learning-based models achieve higher accuracy (90%-98%); however, they require more runtime memory and more processing, in both computational power and execution time (1 hour 20 minutes), for feature extraction and training. Moreover, deep learning models are not fully invariant to translation, rotation, and scaling unless the training data includes translated, rotated, or scaled signs. Orientation-Based Hashcode (OBH), by contrast, completes gesture recognition in a significantly shorter time (5 minutes) with reasonable accuracy (80%-85%), and it is unaffected by translation, rotation, scaling, or occlusion. We therefore develop a new intermediate model that detects and classifies sign language with a processing time close to OBH's (6 minutes) while offering competitive accuracy (90%-96%) and the same invariance properties. This paper presents a coupled, fully connected autonomous system that combines OBH and Gabor features with machine learning models. The proposed model is evaluated on 576 sign alphabet images (RGB and depth) from 24 distinct categories, and the results are compared with those obtained using traditional machine learning methodologies. The proposed methodology achieves 95.8% accuracy on a randomly selected test set and 93.85% accuracy under 9-fold cross-validation.
{"title":"Deep Learning with OBH for Real-Time Rotation-Invariant Signs Detection","authors":"S. Akhter, Shah Jafor Sadeek Quaderi, Saleh Ud-Din Ahmed","doi":"10.1145/3587828.3587884","DOIUrl":"https://doi.org/10.1145/3587828.3587884","url":null,"abstract":"Numerous studies are being undertaken to provide answers for sign language recognition and classification. Deep learning-based models have higher accuracy (90%-98%); however, require more runtime memory and processing in terms of both computational power and execution time (1 hour 20 minutes) for feature extraction and training images. Besides, deep learning models are not entirely insensitive to translation, rotation, and scaling; unless the training data includes rotated, translated, or scaled signs. However, Orientation-Based Hashcode (OBH) completes gesture recognition in a significantly shorter length of time (5 minutes) and with reasonable accuracy (80%-85%). In addition, OBH is not affected by translation, rotation, scaling, or occlusion. As a result, a new intermediary model is developed to detect sign language and perform classification with a reasonable processing time (6 minutes) like OBH while providing attractive accuracy (90%-96%) and invariance qualities. This paper presents a coupled and completely networked autonomous system comprised of OBH and Gabor features with machine learning models. The proposed model is evaluated with 576 sign alphabet images (RGB and Depth) from 24 distinct categories, and the results are compared to those obtained using traditional machine learning methodologies. The proposed methodology is 95.8% accurate against a randomly selected test dataset and 93.85% accurate after 9-fold validation.","PeriodicalId":340917,"journal":{"name":"Proceedings of the 2023 12th International Conference on Software and Computer Applications","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131388245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robotic process automation of calculating investments in a business project
Michael Hafenscherer, V. Mezhuyev, Martin Tschandl
Robotic process automation (RPA) technology is currently gaining importance in business practice, as it enables the automation of business processes and thus a significant increase in business efficiency. This paper investigates the impact of RPA on a specific business process, investment costing, using a software robot for the projects of an energy supply company. For this purpose, an automated calculation tool was developed with Microsoft Power Apps and Microsoft Power Automate. Applying the proposed tool shows its potential by significantly reducing the time employees spend recalculating project investments under different parameters. The developed RPA tool can be extended to calculate other types of standard investments. The results are aimed at managers pursuing the potential of business process automation.
{"title":"Robotic process automation of calculating investments in a business project","authors":"Michael Hafenscherer, V. Mezhuyev, Martin Tschandl","doi":"10.1145/3587828.3587874","DOIUrl":"https://doi.org/10.1145/3587828.3587874","url":null,"abstract":"Robotic process automation (RPA) technology currently gains importance in business practice as it enables the automation of business processes and thus a significant increase in business efficiency. This paper aims to investigate the impact of RPA on the specific business process of investment costing using a software robot for the projects of an energy supply company. For this purpose, an automated calculation tool was developed with Microsoft Power Apps and Microsoft Power Automate. The application of the proposed tool shows potential by significantly reducing the working time of employees for recalculating project investments based on different parameters. The developed RPA tool can be extended to calculate other types of standard investments. The results are aimed at managers pursuing the potential for business process automation.","PeriodicalId":340917,"journal":{"name":"Proceedings of the 2023 12th International Conference on Software and Computer Applications","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122193296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Use of Dynamic n-Gram to Enhance TF-IDF Features Extraction for Bahasa Indonesia Cyberbullying Classification
Yudi Setiawan, N. Maulidevi, K. Surendro
Detecting cyberbullying in a sentence or utterance is challenging because of syntactic and lexical (meaning) variation. Term Frequency-Inverse Document Frequency (TF-IDF) performs textual feature extraction to produce thematic candidates based on word-occurrence statistics. However, these candidates are generated without considering the term relationships between constituent elements in the language's syntax. This study discusses a TF-IDF feature extraction model that uses an n-Gram approach to produce candidate features based on specified term relationships. The application of thresholding to form dynamic n-Gram segmentation is also discussed. The dynamic n-Gram model for TF-IDF feature extraction can then be used in cyberbullying classification to overcome variation in the syntax and meaning of Bahasa Indonesia sentences and speech.
{"title":"The Use of Dynamic n-Gram to Enhance TF-IDF Features Extraction for Bahasa Indonesia Cyberbullying Classification","authors":"Yudi Setiawan, N. Maulidevi, K. Surendro","doi":"10.1145/3587828.3587858","DOIUrl":"https://doi.org/10.1145/3587828.3587858","url":null,"abstract":"Cyberbullying detection in a sentence or utterance has challenges due to syntactic and meaning variations (lexical). Term Frequency-Inverse Document Frequency (TF-IDF) carries out textual feature extraction to produce candidates thematically based on word occurrence statistics. However, these candidates are generated without considering a term relationship between constituent elements in the parsing language syntax. This study discusses a TF-IDF feature extraction model using the n-Gram approach to produce candidate feature selection based on a specified term relationship. Thresholding applications for the formation of dynamic n-Gram segmentation were also discussed. Furthermore, the dynamic n-Gram model in TF-IDF feature extraction can be used in cyberbullying classification to overcome variations in syntax and meaning of sentences/speech from Bahasa Indonesia.","PeriodicalId":340917,"journal":{"name":"Proceedings of the 2023 12th International Conference on Software and Computer Applications","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114989665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IoT-Based Climate Change Prediction System
Louise Marie Nirere, Kayalvizhi Jayavel, Alexander Ngenzi
Climate change is one of the most significant challenges to every country's development, wreaking havoc on the lives of all people on this planet. Researchers have produced numerous studies of strategies for tracking climate change. The current climate change tracking method in Rwanda employs a weather station model, in which numerous fixed weather stations are installed throughout the country; however, because of their immobility, this approach cannot cover the entire country. Without advanced methodologies and technology, climate change tracking has become extremely expensive and suffers from inaccuracies, owing to insufficient knowledge of how to analyze the collected data and a lack of sufficiently accurate hardware. In this research, an ESP8266 uses MQ-135 and DHT11 sensors to collect carbon dioxide concentration and temperature/humidity, respectively, and a push button indicates the current season. The ESP8266 is programmed to send data over the MQTT protocol, using its Wi-Fi capability to publish to an MQTT broker. Using MQTT's publish/subscribe model, Node-RED subscribes to the topics defined on the broker to obtain the data, which is then stored permanently in MongoDB and also fed into a machine learning model for climate change/warming prediction. Several algorithms are evaluated, and the Random Forest classifier proves to be the best model. This study shows that an increase in carbon dioxide concentration leads to a gradual increase in environmental temperature. Finally, the prediction indicates that if no measures are taken now, climate change in Rwanda's industrial zone will be dominated by warming periods in the future.
{"title":"IoT-BASED CLIMATE CHANGE PREDICTION SYSTEM","authors":"Louise Marie Nirere, Kayalvizhi Jayavel, Alexander Ngenzi","doi":"10.1145/3587828.3587862","DOIUrl":"https://doi.org/10.1145/3587828.3587862","url":null,"abstract":"Climate change is one of the most significant challenges to every country's development, ravaging havoc on the lives of all people on this planet. Researchers have raised numerous research and studies of strategies for tracking climate change. The current climate change tracking method in Rwanda employs a weather station model, in which numerous fixed weather stations are installed throughout the country; however, due to its immobility, this process cannot cover the entire country. With the lack of advanced methodologies and technology, the process of climate change tracking has become extremely expensive and suffered inaccuracies due to a lack of proper knowledge of analyzing collected data, and the lack of specific accurate hardware. Throughout this research, with the use of the MQ-135 and DHT11 sensors, ESP8266 collects carbon dioxide gas and temperature/humidity respectively and other component include a push button for detecting the current season. ESP8266 is programmed to send data over MQTT protocol, which uses Wi-Fi capability to send data to MQTT Broker. Using the MQTT protocol's Publish/Subscribe criteria, node-red subscribes to the topics defined in the MQTT broker to obtain data, which is then sent to MongoDB for permanent storage and also fed into the machine learning model for climate change/warming prediction. Different algorithms are used to evaluate this model. As result, Random Forest classifier approves itself to be the best model in evaluating the built model. This study shows that the increase in carbon dioxide gas leads to the gradual increase in the environmental temperature. Finally, the prediction clarifies that if no measures are taken presently, the climate change in Rwanda's Industrial zone will be dominated by warming periods in the future.","PeriodicalId":340917,"journal":{"name":"Proceedings of the 2023 12th International Conference on Software and Computer Applications","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124517170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Question Difficulty Prediction with External Knowledge
Jun He, J. Chen, Li Peng, Bo Sun, Huiying Zhang
The difficulty of test questions is an important indicator for educational examination and for recommending personalized learning resources. Its evaluation mainly depends on the experience of experts, which is subjective. In recent years, question difficulty prediction (QDP) using neural networks has attracted increasing attention. Although these methods improve QDP efficiency, they work poorly for questions involving abstract concepts, such as numerical calculations and dates, and for questions whose answers require background knowledge. We therefore propose a difficulty prediction model based on rich knowledge fusion (RKF+), which addresses the inability of existing difficulty prediction models to obtain conceptual and background knowledge. The key is an attention mechanism with a sentry vector, which dynamically obtains the text representation and the external knowledge representation of test questions. To further fuse the acquired external knowledge, the model adds a bi-interaction layer. The validity of the model is verified on three different datasets, and the importance of the attention mechanism and the external knowledge representation is further analyzed through ablation experiments. In addition, based on a real English reading comprehension test dataset, we explore the influence of two kinds of external knowledge on the question difficulty prediction model.
{"title":"Question Difficulty Prediction with External Knowledge","authors":"Jun He, J. Chen, Li Peng, Bo Sun, Huiying Zhang","doi":"10.1145/3587828.3587838","DOIUrl":"https://doi.org/10.1145/3587828.3587838","url":null,"abstract":"The difficulty of test questions is an important indicator for educational examination and recommendation of personalized learning resources. Its evaluation mainly depends on the experience of experts, which is subjective. In recent years, question difficulty prediction (QDP) using neural networks has attracted more and more attention. Although these methods improve the QDP efficiency, it works ill for questions involving abstract concepts, such as numerical calculation, date, and questions whose answers require background knowledge. Therefore, we propose a difficulty prediction model based on rich knowledge fusion (RKF+), which solves the problem that the difficulty prediction models cannot obtain conceptual knowledge and background knowledge. The key is to introduce the attentional mechanism with a sentry vector, which can dynamically obtain the text representation and external knowledge representation of test questions. To further fusion the acquired external knowledge, our model added a bi-interaction layer. Finally, the validity of this model is verified on three different datasets. Besides, the importance of attentional mechanism and external knowledge representation is further analyzed by ablation experiment. In addition, based on a real English reading comprehension test dataset, we explore the influence of two kinds of external knowledge on the question difficulty prediction model.","PeriodicalId":340917,"journal":{"name":"Proceedings of the 2023 12th International Conference on Software and Computer Applications","volume":"2000 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125725239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Framework of Formal Specification-Based Data Generation for Deep Neural Networks
Yanzhao Xia, Shaoying Liu
Deep Neural Networks (DNNs) have gained growing attention in many domain-specific supervised learning applications. However, current DNNs still face two challenges: the difficulty of obtaining well-labeled training data for supervised learning, and the limited efficiency of training caused by the lack of precise characteristics of the objects in the training process. We propose a framework of formal specification-based data generation for the training and testing of DNNs. The framework is characterized by using formal specifications to define the important and distinctive features of the objects to be identified; these features serve as the foundation for generating training and testing data for DNNs. In this paper, we discuss all the activities involved in the framework and the detailed approach to writing the formal specifications. We also conduct a case study on traffic sign recognition to validate the framework.
{"title":"A Framework of Formal Specification-Based Data Generation for Deep Neural Networks","authors":"Yanzhao Xia, Shaoying Liu","doi":"10.1145/3587828.3587869","DOIUrl":"https://doi.org/10.1145/3587828.3587869","url":null,"abstract":"Deep Neural Networks (DNNs) have gained growing attention in many domain-specific supervised learning applications. However, the current DNNs still face two challenges. One is the difficulty of obtaining well-labeled training data for supervised learning and the other is concerned with the efficiency of training due to the lack of precise characteristics of the objects in the training process. We propose a framework of formal specification-based data generation for the training and testing of DNNs. The framework is characterized by using formal specifications to define the important and distinct features of the objects to be identified. The features are expected to serve as the foundation for generating training and testing data for DNNs. In this paper, we discuss all the activities involved in the framework and the detailed approach to writing the formal specifications. We also conduct a case study on traffic sign recognition to validate the framework.","PeriodicalId":340917,"journal":{"name":"Proceedings of the 2023 12th International Conference on Software and Computer Applications","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130194311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integration of TinyML-based proximity and cough sensing in wearable devices for monitoring infectious disease's social distance compliance
Ritha M. Umutoni, M. M. Ogore, Rosette L. Savanna, D. Hanyurwimfura, Jimmy Nsenga, Didacienne Mukanyirigira, Frederic Nzanywayingoma, Desire Ngabo, Joseph Habiyaremye
With the advent of artificial intelligence (AI) and the Internet of Things (IoT), there has been a rapid increase in the use of sensors to intelligently monitor the environment and the movement of objects. Smart solutions based on proximity sensing have been widely used to monitor infectious diseases by limiting the transmission of contagious diseases. They are an alternative to conventional social distancing technologies such as Bluetooth and cameras, which rely on machine learning (ML) and image processing to identify trespassers and detect multiple objects in real time. This paper leverages emerging TinyML technology to design and develop a wearable device that can help prevent infectious diseases from spreading. The device senses the cough sound of the nearest person within a limited distance and identifies nearby objects, such as humans, animals (dogs, goats), and wind-blown vegetation, based on the patterns of PIR signals reflected from different objects. Using machine learning algorithms, the device can notify the user whether or not they are in a safe environment. This wearable has the potential to be used for monitoring the transmission of contagious diseases by detecting and identifying moving objects and alerting people to keep their distance when they are in an unsafe environment with a high risk of exposure to disease. This research project focuses on monitoring risky environments to prevent infectious disease transmission between humans and between humans and animals, reminding users to keep their distance for their safety, and on using a Convolutional Neural Network (CNN) on the device to identify moving objects and detect coughs. The system has been evaluated, and the experiments show an accuracy of 92.1% for object detection and 68% for cough detection, which is promising for detecting a safe environment. This accuracy could be increased over time via reinforcement learning.
{"title":"Integration of TinyML-based proximity and couch sensing in wearable devices for monitoring infectious disease's social distance compliance","authors":"Ritha M. Umutoni, M. M. Ogore, Rosette L. Savanna, D. Hanyurwimfura, Jimmy Nsenga, Didacienne Mukanyirigira, Frederic Nzanywayingoma, Desire Ngabo, Joseph Habiyaremye","doi":"10.1145/3587828.3587880","DOIUrl":"https://doi.org/10.1145/3587828.3587880","url":null,"abstract":"With the advent of artificial intelligence (AI) and Internet of Things (IoT), there has been a rapid increase in the use of sensors to intelligently monitor the environment and movement of objects. Smart solutions have been widely used for monitoring infectious diseases by limiting the transmission of contagious diseases using proximity sensing systems. This is an alternative to conventional social distancing technologies like Bluetooth and cameras which uses machine learning (ML), image processing to identify trespassers, and multiple object detection in real-time. This paper leverages the emerging Tiny ML technology to design and develop a wearable device that can prevent infectious diseases from spreading. The device senses the cough sound of the nearest person within a limited distance and then identify the nearest objects such as humans, animals (dog, goats), and wind-blown vegetation, based on patterns of PIR signals bounced back from different objects. By using machine learning algorithms, the device can be able to notify the user when they are in a safe environment or not. This solution is a wearable device that has the potential to be used in monitoring the transmission of contagious diseases by detecting and identifying moving objects and alerting people to keep their distance when they are in an unsafe environment with a high risk of being exposed to the disease. This work-focused research project will particularly focus on monitoring the risk environment to prevent infectious diseases between humans and between humans and animals, reminding users to keep their distance for their safety and the use of the Convolutional Neural Network (CNN) algorithm on the device for identifying moving objects and for detecting cough. The system has been evaluated, and the experiments have shown a performance accuracy of 92.1% for object detection and 68% for cough detection, promising for detecting a safe environment. This accuracy could be increased over time via reinforcement learning.","PeriodicalId":340917,"journal":{"name":"Proceedings of the 2023 12th International Conference on Software and Computer Applications","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121694987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Framework for Actor-Oriented Automated Hate Speech Detection
Rinda Cahyana, N. Maulidevi, K. Surendro
Research on automated hate speech detection has not yet detailed the actor who is the source of the hatred, even though the forms of intervention for perpetrators and for supporters are different. Previous research on this topic has paid much attention to the action component in the form of sentiment-bearing words. However, it has yet to pay attention to the actor component, which can differentiate legal hate speech from illegal hate speech and distinguish hateful or offensive speech from free speech, thus addressing the misclassification reported in a previous study. This research proposes a framework for automated hate speech detection that provides both an actor and an action component to solve such errors and to meet the need for comprehensive interventions. This study shows how to apply the framework in a rule-based approach by considering the actor component when predicting hate speech and differentiating it from other speech. The prediction process is called actor-oriented automated hate speech detection.
{"title":"A Framework for Actor-Oriented Automated Hate Speech Detection","authors":"Rinda Cahyana, N. Maulidevi, K. Surendro","doi":"10.1145/3587828.3587870","DOIUrl":"https://doi.org/10.1145/3587828.3587870","url":null,"abstract":"The development of automated hate speech detection research has not yet detailed the actor who is the source of hatred, even though the forms of hate speech intervention for perpetrators and supporters are different. Previous research on this topic has paid much attention to the action component in the form of sentimental words. However, it has yet to pay attention to the actor component that can differentiate legal hate speech from illegal hate speech and transforms hateful or offensive speech into free speech, thus dealing with misclassification, as reported in a previous study. This research proposes a framework for the automated detection of hate speech that provides an actor and action component to solve the problem of such errors and meets the need for comprehensive interventions. This study shows how to apply the framework in a rule-based approach by considering the actor component in predicting hate speech and differentiating it from others. The prediction process is called actor-oriented automated hate speech detection.","PeriodicalId":340917,"journal":{"name":"Proceedings of the 2023 12th International Conference on Software and Computer Applications","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132639917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Radar-Based Eyeblink Detection Under Various Conditions
Xinze Zhang, W. Brahim, Mingyang Fan, Jianhua Ma, Muxin Ma, Alex Qi
This paper assesses the ability of Frequency Modulated Continuous Wave (FMCW) radar to detect eyeblinks, not only in direct-facing, limited-range situations but also under different factors that may affect detection accuracy, such as distance, angle, and movement. To achieve this, we propose a three-layered processing chain that relies on the Adaptive Variational Mode Decomposition (AVMD) algorithm to extract the blink signal, since it can correctly separate the blink component from complex radar reflections. We use an off-the-shelf 77 GHz FMCW radar from Texas Instruments (TI) to perform several experiments and evaluate the feasibility of blink detection in each scenario, including distances up to 1.2 meters, different angles, chewing gum, and different subjects. The evaluation results show that FMCW radar combined with our processing chain can detect eyeblinks correctly under different conditions and at greater distances than previous works.
{"title":"Radar-Based Eyeblink Detection Under Various Conditions","authors":"Xinze Zhang, W. Brahim, Mingyang Fan, Jianhua Ma, Muxin Ma, Alex Qi","doi":"10.1145/3587828.3587855","DOIUrl":"https://doi.org/10.1145/3587828.3587855","url":null,"abstract":"This paper aims to assess the ability of Frequency Modulated Continuous Wave (FMCW) radar to detect blinks not only in direct facing and limited-range situations but also by exploring different factors that may affect the accuracy of the detection process such as distance, angle, and movement. To achieve this, we propose a three-layered processing chain that relies on the use of Adaptive Variational Mode Decomposition (AVMD) algorithm to extract the blink signal as it can correctly separate it from complex radar reflections. We use an off-the-shelf FMCW radar operating at 77 GHz from Texas Instruments (TI) to perform several experiments and evaluate the feasibility of blink detection in each scenario, including distances up to 1.2 meters, different angles, while chewing gum, and across different subjects. The evaluation results show that FMCW radar combined with our processing chain can detect eyeblinks correctly under different conditions and farther distances than previous works.","PeriodicalId":340917,"journal":{"name":"Proceedings of the 2023 12th International Conference on Software and Computer Applications","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130277639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
kNN Imputation Versus Mean Imputation for Handling Missing Data on Vulnerability Index in Dealing with Covid-19 in Indonesia
Heru Nugroho, N. P. Utama, K. Surendro
The COVID-19 virus spread rapidly throughout the world, and the WHO declared it a pandemic on March 11, 2020. Previous research considered five domains associated with the social vulnerability index in the context of managing and mitigating pandemic infection in the community: socioeconomic conditions, demographic composition, housing and hygiene, availability of health care facilities, and epidemiological factors related to COVID-19. The Katadata Insight Center (KIC) investigates the vulnerability index of Indonesian provinces to the coronavirus based on the risks of regional characteristics, population health, and mobility. The supporting data may be incomplete or missing, a common flaw that influences the prediction system's results and can render it ineffective. This paper compares kNN-based imputation with mean imputation for handling missing data, which otherwise causes the provincial vulnerability index in Indonesia to be measured incorrectly. The vulnerability index associated with COVID-19 should be one of the factors the Indonesian government considers when making decisions or establishing lockdown strategies and large-scale restriction rules in each province. When missing data is discovered, kNN imputation and mean imputation can be used as solutions. Based on the experimental results, mean imputation achieves a much lower average RMSE than the kNN imputation method on the dataset of the vulnerability index for dealing with COVID-19 in Indonesia.
{"title":"kNN Imputation Versus Mean Imputation for Handling Missing Data on Vulnerability Index in Dealing with Covid-19 in Indonesia","authors":"Heru Nugroho, N. P. Utama, K. Surendro","doi":"10.1145/3587828.3587832","DOIUrl":"https://doi.org/10.1145/3587828.3587832","url":null,"abstract":"The COVID-19 virus has rapidly spread throughout the world, and the WHO declared it a pandemic on March 11, 2020. Previous research considered five domains associated with the social vulnerability index in the context of pandemic infection management and mitigation in the community, such as socioeconomic conditions, demographic composition, housing and hygiene, availability of health care facilities, and epidemiological factors related to COVID-19. The Katadata Insight Center (KIC) investigates the vulnerability index of Indonesian provinces to the coronavirus based on the risks of regional characteristics, population health, and mobility. There is a chance that the supporting data is either incomplete or missing, which is a common flaw that influences the prediction system's results and renders it ineffective. This paper will compare the kNN-based imputation method with the mean imputation to handle missing data, which causes the provincial vulnerability index in Indonesia to be measured incorrectly. The vulnerability index associated with COVID-19 should be one of the factors considered by the Indonesian government when making decisions or establishing a lockdown strategy and large-scale restriction rules in each province. When missing data is discovered, kNN imputation and mean imputation can be used as a solution. Based on the results of the experiments, the mean imputation has a much lower average RMSE performance than the kNN imputation method in the dataset of vulnerability index in dealing with COVID-19 in Indonesia.","PeriodicalId":340917,"journal":{"name":"Proceedings of the 2023 12th International Conference on Software and Computer Applications","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127260360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}