Facial Expression Recognition Using Patch-Based LBPS in an Unconstrained Environment
Pub Date: 2021-04-06 | DOI: 10.1109/CAIDA51941.2021.9425309
T. Saba, Muhammad Kashif, Erum Afzal
Facial expression recognition in the wild is challenging due to various unconstrained conditions. Although existing facial expression classifiers perform almost perfectly on constrained frontal faces, they fail to perform well on the partially occluded faces common in the wild. In this paper, an improved facial expression recognition technique is proposed: a patch-based multiple local binary pattern (LBP) descriptor comprising three-patch and four-patch LBPs (TPLBP and FPLBP). The two-dimensional discrete cosine transform (DCT) is applied over the entire coded TPLBP and FPLBP face image as a feature extractor. Experimental results show that the proposed technique achieves a better recognition rate than state-of-the-art techniques. Facial expression images from the Oulu-CASIA dataset were evaluated using a support vector machine (SVM) classifier, resulting in an accuracy of 92.1%.
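A minimal sketch of the descriptor-plus-classifier pipeline described above, using scikit-image's standard LBP as a stand-in for TPLBP/FPLBP (which are not available in common libraries) and scikit-learn's SVM; the function names, radii, and number of retained DCT coefficients are illustrative assumptions, not the authors' implementation.

```python
# Sketch: LBP coding -> 2D DCT features -> SVM, with standard LBP standing in
# for the paper's TPLBP/FPLBP descriptors.
import numpy as np
from scipy.fftpack import dct
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_dct_features(gray_face, n_coeffs=64):
    """Code the face with LBP, apply a 2D DCT, keep low-frequency coefficients."""
    coded = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    freq = dct(dct(coded, axis=0, norm="ortho"), axis=1, norm="ortho")
    k = int(np.sqrt(n_coeffs))
    return freq[:k, :k].ravel()  # top-left block carries most of the energy

def train_expression_svm(faces, labels):
    """faces: list of 2D grayscale arrays; labels: expression classes."""
    X = np.stack([lbp_dct_features(f) for f in faces])
    return SVC(kernel="linear").fit(X, labels)
```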
CPN.Net: An Automated Colored Petri Nets Model Extraction From .Net Based Source Code
Pub Date: 2021-04-06 | DOI: 10.1109/CAIDA51941.2021.9425201
Aghyad Albaghajati, Moataz A. Ahmed
Multithreaded and parallel software systems are notably difficult to test because of their non-deterministic nature. The literature suggests formal modeling and model checking to verify such systems. However, manually constructing models and abstractions of such systems can be time-consuming, tiresome, and error-prone, so automated model extraction approaches are necessary. In this study, we propose an approach to automatically extract a Colored Petri Nets model from source code. Moreover, we establish a set of mapping rules to translate control flow graphs into Colored Petri Nets.
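The core of such an extraction is a mapping from control-flow-graph elements to Petri net elements. The sketch below is an illustrative assumption of what one simple rule could look like (basic blocks become places, control-flow edges become transitions); the paper's actual mapping rules and .Net analysis are not reproduced here.

```python
# Illustrative CFG -> Petri-net mapping: each basic block becomes a place,
# each control-flow edge becomes a transition with one input and one output arc.
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    places: set = field(default_factory=set)
    transitions: set = field(default_factory=set)
    arcs: list = field(default_factory=list)   # (source, target) pairs

def cfg_to_petri_net(cfg_edges):
    """cfg_edges: iterable of (src_block, dst_block) pairs from a control flow graph."""
    net = PetriNet()
    for src, dst in cfg_edges:
        p_src, p_dst = f"P_{src}", f"P_{dst}"
        t = f"T_{src}_{dst}"
        net.places.update({p_src, p_dst})
        net.transitions.add(t)
        net.arcs += [(p_src, t), (t, p_dst)]
    return net

# Example: a tiny branch-and-join CFG
net = cfg_to_petri_net([("entry", "if"), ("if", "then"), ("if", "else"),
                        ("then", "exit"), ("else", "exit")])
```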
Securing E-payment Systems by RFID and Deep Facial Biometry
Pub Date: 2021-04-06 | DOI: 10.1109/CAIDA51941.2021.9425175
Nadir Kamel Benamara, M. Keche, Murisi Wellington, Zhou Munyaradzi
Security is a major concern in Electronic Payment (E-Payment) systems. Usually, these systems are protected against illegal users, so-called hackers, by different means, such as personal identification numbers (PINs), passwords, and cards. However, hackers may manage to bypass this protection through various strategies. Many techniques have been proposed to counter hacking attempts; nevertheless, an illegal user may still access the E-Payment system easily by stealing a legitimate user's payment card. The use of Artificial Intelligence methods for face authentication, such as deep learning, has made facial biometry a rapidly developing and accurate technology, especially over the past decade. In this paper, we propose the joint use of deep learning-based facial biometry and RFID cards to reinforce the security of an E-Payment system, ensuring that a user must be physically present and carrying his RFID card to access it. We tested three deep learning-based face authentication models and validated them on the MUCT and CASIA Face-V5 datasets to choose the most suitable one for the proposed secured E-Payment system, obtaining top verification rates of 99.90% and 99.26%, respectively. Two versions of the system are proposed: in the first, based on a Personal Computer (PC) and a Raspberry Pi board, face authentication runs on the PC while a Raspberry Pi 3 controls the RFID reader; in the second, which may be considered an embedded system, the Raspberry Pi performs the entire job.
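A minimal sketch of the two-factor gate described above: access is granted only when the presented RFID tag belongs to an enrolled user and a live face embedding matches that user's enrolled template. The embedding source, the enrollment store, and the similarity threshold are assumptions for illustration, not the authors' implementation.

```python
# Sketch: combine an RFID check with a face-embedding match before allowing payment.
import numpy as np

SIM_THRESHOLD = 0.8  # assumed cosine-similarity threshold for a face match

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authorize_payment(rfid_uid, live_embedding, enrolled):
    """enrolled: dict mapping RFID UID -> enrolled face embedding (np.ndarray)."""
    template = enrolled.get(rfid_uid)
    if template is None:
        return False                      # unknown or missing card
    return cosine_similarity(live_embedding, template) >= SIM_THRESHOLD
```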
M-health Concept, Services and Issues
Pub Date: 2021-04-06 | DOI: 10.1109/CAIDA51941.2021.9425262
Anwar D. Alhejaili
In recent years, developments in technology, communication, and networking have led to the emergence of the mobile computing concept and the IoT. Mobile computing is used in various areas such as online shopping, wearable devices, and healthcare. In the healthcare sector, mobile computing extends IoT functionality to provide significant support, becoming mobile computing healthcare (M-health). In addition, due to population growth, there is an urgent need to meet healthcare requirements through mobile healthcare. This paper discusses the mobile computing healthcare concept (M-healthcare), the impacts of using mobile computing in IoT healthcare, the types of available healthcare applications, common services, and some open issues. It also proposes a secure framework for M-health.
A Novel Ensemble Learning Approach of Deep Learning Techniques to Monitor Distracted Driver Behaviour in Real Time
Pub Date: 2021-04-06 | DOI: 10.1109/CAIDA51941.2021.9425243
Hafiz Umer Draz, Muhammad Zeeshan Khan, M. U. Ghani Khan, A. Rehman, I. Abunadi
Driver distraction is one of the major causes of road-safety problems and accidents. According to the World Health Organization (WHO), an estimated 285,000 accidents occur each year as a result of distracted driving. To address this fatal issue, and with the future of Intelligent Transport Systems in mind, we propose a novel ensemble learning approach based on deep learning techniques for detecting a distracted driver. In the proposed approach, we fine-tuned Faster R-CNN to detect the objects involved in distracting the driver while driving and achieved 97.7% validation accuracy. Moreover, to strengthen the prediction and reduce false positives, pose points of the driver are also extracted. Using these pose points, we ensure that only objects directly associated with the driver's distraction are detected. The association of each detected object with the driver is calculated using the intersection over union between the detected object and the driver's current posture features. Our proposed ensemble learning technique achieves over 92.2% accuracy, which is far better than previously proposed models. The proposed method is time-efficient, robust, and cost-efficient. Such a model can not only improve road safety but also help governments save resources otherwise spent on monetary losses.
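The object-driver association step relies on intersection over union between a detected object's box and a box around the driver's pose points. The sketch below is a generic IoU computation under that reading; the box format and the association threshold are illustrative assumptions, not values taken from the paper.

```python
# Sketch: associate a detected object with the driver by the IoU between the
# object's bounding box and a box enclosing the driver's pose keypoints.
def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def pose_box(keypoints):
    """Enclosing box for a list of (x, y) pose points of the driver."""
    xs, ys = [p[0] for p in keypoints], [p[1] for p in keypoints]
    return (min(xs), min(ys), max(xs), max(ys))

def is_associated(object_box, keypoints, thresh=0.1):  # assumed threshold
    return iou(object_box, pose_box(keypoints)) >= thresh
```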
Robotics to Enhance the Teaching and Learning Process
Pub Date: 2021-04-06 | DOI: 10.1109/CAIDA51941.2021.9425061
R. Al-Jumeily, H. Kolivand, Shatha Ghareeb, J. Mustafina, M. Al-khafajiy, T. Baker
In 21st-century learning, collaboration, digital literacy, critical thinking, and problem-solving are considered the core competencies to be enhanced and developed further. In a multinational country such as the United Arab Emirates (UAE), which remains one of the fastest-growing countries in the world across all domains including education, tourism, and health, there is a gap in matching students from different backgrounds. To accommodate the different backgrounds, heritages, and education systems of incoming expatriates and their families, several curricula (American, British, local, and Indian, to name but a few) currently run in the UAE. However, moving from one system to another is not a straightforward process for many reasons, which can be categorized into three groups: the admission stage, the leveling stage, and the class stage. The proposed work will be implemented as a case study in one of the British-curriculum schools in Abu Dhabi, UAE. This paper considers the application of robotics in education, providing the background for the current use of robots in society.
Deep Learning-Based Classification of News Texts Using Doc2Vec Model
Pub Date: 2021-04-06 | DOI: 10.1109/CAIDA51941.2021.9425290
Hasibe Busra Dogru, Sahra Tilki, Akhtar Jamil, Alaa Ali Hameed
The rapid growth in internet usage has also resulted in the bulk generation of text data. Since manually managing unstructured text is challenging, new techniques for the automatic classification of textual content are needed. The main objective of text classification is to train a model that places unseen text into the correct category. In this study, text classification was performed using the Doc2Vec word embedding method on the Turkish Text Classification 3600 (TTC-3600) dataset, consisting of Turkish news texts, and the BBC-News dataset, consisting of English news texts. A deep learning-based CNN and the traditional machine learning classifiers Gaussian Naive Bayes (GNB), Random Forest (RF), Naive Bayes (NB), and Support Vector Machine (SVM) were used as classification methods. In the proposed model, the highest results were obtained with the CNN: 94.17% on the Turkish dataset and 96.41% on the English dataset.
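A minimal sketch of the embedding stage with gensim's Doc2Vec followed by one of the traditional classifiers compared above (an SVM); corpus loading, tokenization, and all hyperparameters are illustrative assumptions rather than the paper's settings, and the CNN branch is not shown.

```python
# Sketch: Doc2Vec document embeddings fed to an SVM classifier.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import SVC

def train_doc2vec_svm(texts, labels, vector_size=100, epochs=20):
    """texts: list of token lists; labels: list of category labels."""
    tagged = [TaggedDocument(words=t, tags=[i]) for i, t in enumerate(texts)]
    d2v = Doc2Vec(tagged, vector_size=vector_size, window=5,
                  min_count=2, epochs=epochs)
    X = [d2v.infer_vector(t) for t in texts]
    clf = SVC(kernel="linear").fit(X, labels)
    return d2v, clf

def predict_category(d2v, clf, tokens):
    """Infer a vector for an unseen (tokenized) document and classify it."""
    return clf.predict([d2v.infer_vector(tokens)])[0]
```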
Smart Car Seat Belt: Accident Detection and Emergency Services in Smart City Environment
Pub Date: 2021-04-06 | DOI: 10.1109/CAIDA51941.2021.9425108
Majd Khaled Almohsen, Renad khlief alonzi, Taif Hammad Alanazi, Shahad Nasser BinSaif, Maha Mohammed almujally
Delay in the arrival of an emergency team after a road accident is one of the main reasons for the increase in the number of deaths in many countries across the globe, and Saudi Arabia is no exception. This was the key motivation for undertaking this project, with the aim of contributing an IoT product that can reduce the number of deaths resulting from the delayed arrival of an emergency team or ambulance. In this project, we designed a seat belt with a sensor that senses the driver's heart rate and sends a notification with the driver's location to the ambulance service if an accident occurs. To determine whether an accident has occurred, raw data from the heart-rate sensor is collected along with data from the car's vibration sensor. Based on these two readings, an Arduino UNO microcontroller determines whether an accident has taken place; if so, the controller uses the car's GPS to obtain the current location and sends a notification to pre-stored emergency contact numbers, using GSM technology to deliver an alert containing the location of the accident. We also added a fingerprint sensor to confirm the identity of the person in the seat, so that the heart-rate monitoring is not confused if the driver changes. The Arduino platform is used to implement the hardware connections, and the Arduino IDE is used for programming. The end product of this project is expected to reduce the percentage of deaths that occur due to ambulance delays.
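The decision step combines the two sensor readings before any alert is sent. The sketch below illustrates that logic in Python purely for exposition (the described system runs on an Arduino UNO in C/C++); the heart-rate bounds, vibration threshold, and message format are assumptions, not values from the paper.

```python
# Sketch of the two-sensor decision logic: an accident is flagged when a strong
# vibration spike coincides with an abnormal heart rate; thresholds are assumed.
LOW_BPM, HIGH_BPM = 40, 140        # assumed abnormal heart-rate bounds
VIBRATION_LIMIT = 800              # assumed raw vibration-sensor threshold

def accident_detected(heart_bpm, vibration_reading):
    abnormal_heart = heart_bpm < LOW_BPM or heart_bpm > HIGH_BPM
    strong_impact = vibration_reading > VIBRATION_LIMIT
    return abnormal_heart and strong_impact

def build_alert(latitude, longitude):
    """Message body that would be sent over GSM to the emergency contacts."""
    return f"Accident detected. Location: {latitude:.5f},{longitude:.5f}"

if accident_detected(heart_bpm=35, vibration_reading=950):
    print(build_alert(24.71355, 46.67530))   # example coordinates
```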
Data Analytics and Predictive Modeling for Appointments No-show at a Tertiary Care Hospital
Pub Date: 2021-04-06 | DOI: 10.1109/CAIDA51941.2021.9425258
Amani Moharram, Saud Altamimi, Riyad Alshammari
This study aims to develop an accurate machine learning model for predicting no-shows in the pediatric outpatient clinics at King Faisal Specialist Hospital and Research Centre (KFSH&RC), and to understand the characteristics of pediatric patients who are most likely not to show up for their scheduled appointments. Appointment no-show data were collected from the KFSH&RC data warehouse over the period 01 Jan – 31 Dec 2019. We analyzed a dataset consisting of 101,534 scheduled appointments for 35,290 pediatric patients; over this period there were 11,573 no-shows involving 8,105 patients, a no-show rate of 11.39%. Three machine learning algorithms, namely logistic regression, JRip, and Hoeffding tree, were compared to find the best one. Accuracy, precision, recall, and F-score were used to evaluate the performance of the built models. The precision and recall of the three models were around 90%, and their F-scores were similar, at 0.86. These models improve our capability to identify pediatric patients at high risk of not attending their appointments.
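As a hedged illustration of the first of the three compared models, the sketch below trains a logistic-regression no-show classifier with scikit-learn and reports the metrics listed above. The feature columns are hypothetical placeholders, since the study's actual variables are not given here; JRip and Hoeffding tree are typically run in Weka/MOA and are not shown.

```python
# Sketch: logistic-regression no-show model evaluated with the paper's metrics.
# Column names below are hypothetical placeholders for the study's features.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def train_no_show_model(df: pd.DataFrame):
    X = df[["age", "lead_time_days", "prior_no_shows", "clinic_id"]]  # assumed features
    y = df["no_show"]                                                 # 1 = missed appointment
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    return model, {
        "accuracy": accuracy_score(y_te, pred),
        "precision": precision_score(y_te, pred),
        "recall": recall_score(y_te, pred),
        "f1": f1_score(y_te, pred),
    }
```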
Developing a LBPH-based Face Recognition System for Visually Impaired People
Pub Date: 2021-04-06 | DOI: 10.1109/CAIDA51941.2021.9425275
Md. Golam Mahabub Sarwar, Ashim Dey, Annesha Das
A large number of people around the world suffer from visual impairment, which is a global health issue. These visually challenged people face a great deal of difficulty in carrying out their day-to-day activities, and recognizing a person is one of the major problems they face. This paper presents a face recognition system with auditory output that can help visually challenged people recognize known and unknown persons. The proposed face recognition system comprises three main modules: dataset creation, dataset training, and face recognition. A Haar cascade classifier is used to detect faces in a live video stream, and the Local Binary Pattern Histogram (LBPH) algorithm is then applied to create the recognizer, using the OpenCV-Python library. The system can detect and recognize multiple people and works on both frontal and side views of a face. The overall face recognition accuracy is about 93%. Apart from visually challenged people, elderly people with Alzheimer's disease can also benefit from this system.
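A minimal sketch of the detection-and-recognition loop using the OpenCV components named above (cv2.CascadeClassifier and the LBPH recognizer from opencv-contrib-python); the trained model file, label map, confidence threshold, and the substitution of print() for the system's auditory output are illustrative assumptions.

```python
# Sketch: Haar-cascade face detection + LBPH recognition on a live video stream.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()   # requires opencv-contrib-python
recognizer.read("trainer.yml")                      # assumed pre-trained LBPH model file
names = {1: "Alice", 2: "Bob"}                      # hypothetical label map

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5):
        label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
        # Lower LBPH confidence means a closer match; 70 is an assumed cutoff.
        person = names.get(label, "Unknown") if confidence < 70 else "Unknown"
        print(person)   # in the described system this would be spoken aloud
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```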