A Cluster Analysis of Challenging Behaviors in Autism Spectrum Disorder
Pub Date: 2017-12-01 | DOI: 10.1109/ICMLA.2017.00-85
Elizabeth Stevens, Abigail Atchison, Laura Stevens, Esther Hong, D. Granpeesheh, Dennis R. Dixon, Erik J. Linstead
We apply cluster analysis to a sample of 2,116 children with Autism Spectrum Disorder in order to identify patterns of challenging behaviors observed in home and center-based clinical settings. This is the largest study of its type to date, and the first to employ machine learning; our results indicate that while the presence of multiple challenging behaviors is common, in most cases a dominant behavior emerges. This trend is also observed when we train our cluster models on the male and female samples separately. This work provides a basis for future studies of the relationship between challenging-behavior profiles and learning outcomes, with the ultimate goal of providing personalized therapeutic interventions with maximum efficacy and minimum time and cost.
{"title":"A Cluster Analysis of Challenging Behaviors in Autism Spectrum Disorder","authors":"Elizabeth Stevens, Abigail Atchison, Laura Stevens, Esther Hong, D. Granpeesheh, Dennis R. Dixon, Erik J. Linstead","doi":"10.1109/ICMLA.2017.00-85","DOIUrl":"https://doi.org/10.1109/ICMLA.2017.00-85","url":null,"abstract":"We apply cluster analysis to a sample of 2,116 children with Autism Spectrum Disorder in order to identify patterns of challenging behaviors observed in home and centerbased clinical settings. The largest study of this type to date, and the first to employ machine learning, our results indicate that while the presence of multiple challenging behaviors is common, in most cases a dominant behavior emerges. Furthermore, the trend is also observed when we train our cluster models on the male and female samples separately. This work provides a basis for future studies to understand the relationship of challenging behavior profiles to learning outcomes, with the ultimate goal of providing personalized therapeutic interventions with maximum efficacy and minimum time and cost.","PeriodicalId":6636,"journal":{"name":"2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"26 1","pages":"661-666"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91228224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Motion Trajectory Analysis Based Video Summarization
Pub Date: 2017-12-01 | DOI: 10.1109/ICMLA.2017.0-103
Muhammad Ajmal, M. Naseer, Farooq Ahmad, Asma Saleem
Multimedia technology is growing rapidly and generating enormous amounts of video data, especially in the area of security surveillance. Browsing through such a large collection of videos is a challenging and time-consuming task, and despite technological advances, automatic browsing, retrieval, manipulation, and analysis of large video collections still lag behind. In this paper, a fully automatic human-centric system for video summarization is proposed. In most surveillance applications, human motion is of primary interest. In the proposed system, the moving parts of the video are detected using background subtraction, and blobs are extracted from the resulting binary image. Humans are detected using Histogram of Oriented Gradients (HOG) features with a Support Vector Machine (SVM) classifier. The motion of each person is then tracked through consecutive frames using a Kalman filter, and each person's trajectory is extracted. Analyzing these trajectories yields a meaningful summary that covers only the important parts of the video; one can also mark a region of interest to be included in the summary. Experimental results show that the proposed system reduces a long video to a meaningful summary and saves considerable time and cost in terms of storage, indexing, and browsing effort.
{"title":"Human Motion Trajectory Analysis Based Video Summarization","authors":"Muhammad Ajmal, M. Naseer, Farooq Ahmad, Asma Saleem","doi":"10.1109/ICMLA.2017.0-103","DOIUrl":"https://doi.org/10.1109/ICMLA.2017.0-103","url":null,"abstract":"Multimedia technology is growing day by day and contributing towards enormous amount of video data especially in the area of security surveillance. The browsing through such a large collection of videos is a challenging and time-consuming task. Despite the advancement in technology automatic browsing, retrieval, manipulation and analysis of large videos are still far behind. In this paper a fully automatic human-centric system for video summarization is proposed. In most of the surveillance applications, human motion is of great interest. In proposed system the moving parts in the video are detected using background subtraction, and blobs are extracted from the binary image. Human detection is done through Histogram of Oriented Gradient (HOG) using Support Vector Machine (SVM) classifier. Then, motion of humans is tracked through consecutive frames using Kalman filter, and trajectory of each person is extracted. The analysis of trajectory leads to a meaningful summary which covers only important parts of video. One can also mark region of interest to be included in the summary. Experimental results show the proposed system reduces long video into meaningful summary and saves a lot of time and cost in terms of storage, indexing and browsing effort.","PeriodicalId":6636,"journal":{"name":"2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"11 1","pages":"550-555"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87563256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning Based Link Failure Mitigation
Pub Date: 2017-12-01 | DOI: 10.1109/ICMLA.2017.00-58
Shubham Khunteta, Ashok Kumar Reddy Chavva
Link failure is a major concern for network operators seeking to enhance user experience, both in present systems and in upcoming 5G systems. Many factors can cause link failures, for example handover (HO) failures, poor coverage, and congested cells. Network operators are constantly improving their coverage quality to overcome these issues; however, reducing link failures requires further improvements in both present and next-generation (5G) systems. In this paper, we study the applicability of machine learning (ML) algorithms to reducing link failures at handover. In the proposed method, signal conditions (RSRP/RSRQ) are continuously observed and tracked using deep neural networks such as a Recurrent Neural Network (RNN) or a Long Short-Term Memory (LSTM) network, and the behavior of these signal conditions is fed as input to another neural network that classifies the upcoming event as either HO failure or success in advance. This advance decision allows the UE to take action to mitigate a possible link failure. The algorithms and model proposed in this paper are the first of their kind to connect past signal conditions to future HO outcomes. We show the performance of the proposed algorithms on both system-simulated data and field logs. Given the need for a more proactive UE role in most link-level decisions in 5G systems, the algorithms proposed here are all the more relevant.
{"title":"Deep Learning Based Link Failure Mitigation","authors":"Shubham Khunteta, Ashok Kumar Reddy Chavva","doi":"10.1109/ICMLA.2017.00-58","DOIUrl":"https://doi.org/10.1109/ICMLA.2017.00-58","url":null,"abstract":"Link failure is a cause of a major concern for network operators in enhancing user experience in present system and upcoming 5G systems as well. There are many factors which can cause link failures, for example Handover (HO) failures, poor coverage and congested cells. Network operators are constantly improving their coverage qualities to overcome these issues. However reducing the link failures needs further improvements for the present and next generation (5G) systems. In this paper, we study applicability of Machine Learning (ML) algorithms to reduce link failure at handover. In the method proposed, Signal conditions (RSRP/RSRQ) are continuously observed and tracked using Deep Neural Networks such as Recurrent Neural Network (RNN) or Long Short Term Memory network (LSTM) and thus behavior of these signal conditions are taken as inputs to another neural network which acts as a classifier classifying event in either HO fail or success in advance. This advance in decision allows UE to take action to mitigate the possible link failure. Algorithms and model proposed in this paper are first of its kind connecting the link between past signal conditions and future HO result. We show the performance of the proposed algorithms for both system simulated and field log data. Given the need for more proactive role of UE in most of the link level decision in 5G systems, algorithms proposed in this paper are more relevant.","PeriodicalId":6636,"journal":{"name":"2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"188 1","pages":"806-811"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79407028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine Learning Methods for 1D Ultrasound Breast Cancer Screening
Pub Date: 2017-12-01 | DOI: 10.1109/ICMLA.2017.00-76
Neil J. Joshi, Seth D. Billings, Erika Schwartz, S. Harvey, P. Burlina
This study addresses the development of machine learning methods for reduced-space ultrasound to perform automated prescreening for breast cancer. The use of ultrasound in low-resource settings is constrained by a lack of trained personnel and by equipment costs, which motivates the need for automated, low-cost diagnostic tools. We hypothesize that a solution to this problem is the use of 1D ultrasound (a single piezoelectric element). We leverage random forest classifiers to classify 1D samples of various types of tissue phantoms simulating cancerous lesions, benign lesions, and non-cancerous tissue. In addition, we investigate the ultrasound power and frequency parameters that maximize performance. We show preliminary results on 2-, 3-, and 5-class classification problems for the best power/frequency combination. These results demonstrate promise for the use of a single-element ultrasound device to screen for breast cancer.
{"title":"Machine Learning Methods for 1D Ultrasound Breast Cancer Screening","authors":"Neil J. Joshi, Seth D. Billings, Erika Schwartz, S. Harvey, P. Burlina","doi":"10.1109/ICMLA.2017.00-76","DOIUrl":"https://doi.org/10.1109/ICMLA.2017.00-76","url":null,"abstract":"This study addresses the development of machine learning methods for reduced space ultrasound to perform automated prescreening of breast cancer. The use of ultrasound in low-resource settings is constrained by lack of trained personnel and equipment costs, and motivates the need for automated, low-cost diagnostic tools. We hypothesize a solution to this problem is the use of 1D ultrasound (single piezoelectric element). We leverage random forest classifiers to classify 1D samples of various types of tissue phantoms simulating cancerous, benign lesions, and non-cancerous tissues. In addition, we investigate the optimal ultrasound power and frequency parameters to maximize performance. We show preliminary results on 2-, 3- and 5-class classification problems for the ideal power/frequency combination. These results demonstrate promise towards the use of a single-element ultrasound device to screen for breast cancer.","PeriodicalId":6636,"journal":{"name":"2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"19 1","pages":"711-715"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82189961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition of Dynamic Hand Gestures from 3D Motion Data Using LSTM and CNN Architectures
Pub Date: 2017-12-01 | DOI: 10.1109/ICMLA.2017.00013
Chinmaya R. Naguri, Razvan C. Bunescu
Hand gestures provide a natural, non-verbal form of communication that can augment or replace other communication modalities such as speech or writing. Along with voice commands, hand gestures are becoming the primary means of interaction in games, augmented reality, and virtual reality platforms. Recognition accuracy, flexibility, and computational cost are some of the primary factors that can affect the incorporation of hand gestures into these new technologies, as well as their subsequent retrieval from multimodal corpora. In this paper, we present fast and highly accurate gesture recognition systems based on long short-term memory (LSTM) and convolutional neural network (CNN) architectures that are trained to process input sequences of 3D hand positions and velocities acquired from infrared sensors. When evaluated on real-time recognition of six types of hand gestures, the proposed architectures obtain a 97% F-measure, demonstrating significant potential for practical applications in novel human-computer interfaces.
{"title":"Recognition of Dynamic Hand Gestures from 3D Motion Data Using LSTM and CNN Architectures","authors":"Chinmaya R. Naguri, Razvan C. Bunescu","doi":"10.1109/ICMLA.2017.00013","DOIUrl":"https://doi.org/10.1109/ICMLA.2017.00013","url":null,"abstract":"Hand gestures provide a natural, non-verbal form of communication that can augment or replace other communication modalities such as speech or writing. Along with voice commands, hand gestures are becoming the primary means of interaction in games, augmented reality, and virtual reality platforms. Recognition accuracy, flexibility, and computational cost are some of the primary factors that can impact the incorporation of hand gestures in these new technologies, as well as their subsequent retrieval from multimodal corpora. In this paper, we present fast and highly accurate gesture recognition systems based on long short-term memory (LSTM) and convolutional neural networks (CNN) that are trained to process input sequences of 3D hand positions and velocities acquired from infrared sensors. When evaluated on real time recognition of six types of hand gestures, the proposed architectures obtain 97% F-measure, demonstrating a significant potential for practical applications in novel human-computer interfaces.","PeriodicalId":6636,"journal":{"name":"2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"62 1","pages":"1130-1133"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81458681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transfer Learning for Large Scale Data Using Subspace Alignment
Pub Date: 2017-12-01 | DOI: 10.1109/ICMLA.2017.00-20
Nassara Elhadji-Ille-Gado, E. Grall-Maës, M. Kharouf
A major assumption in many machine learning algorithms is that the training and testing data come from the same feature space and have the same distribution. However, in real applications, this strong hypothesis often does not hold. In this paper, we introduce a new framework for transfer learning in which the source and target domains are represented by subspaces described by eigenvector matrices. To unify subspace distributions between domains, we propose a fast, efficient approximate SVD for feature generation. To transfer knowledge between domains, we first use a subspace learning approach to develop a domain adaptation algorithm in which only target knowledge is transferable. Second, we use the subspace alignment trick to propose a novel transfer domain adaptation method. To evaluate the proposal, we use large-scale data sets. Numerical results, based on accuracy and computational time, are provided along with comparisons to state-of-the-art methods.
{"title":"Transfer Learning for Large Scale Data Using Subspace Alignment","authors":"Nassara Elhadji-Ille-Gado, E. Grall-Maës, M. Kharouf","doi":"10.1109/ICMLA.2017.00-20","DOIUrl":"https://doi.org/10.1109/ICMLA.2017.00-20","url":null,"abstract":"A major assumption in many machine learning algorithms is that the training and testing data must come from the same feature space or have the same distributions. However, in real applications, this strong hypothesis does not hold. In this paper, we introduce a new framework for transfer where the source and target domains are represented by subspaces described by eigenvector matrices. To unify subspace distribution between domains, we propose to use a fast efficient approximative SVD for fast features generation. In order to make a transfer learning between domains, we firstly use a subspace learning approach to develop a domain adaption algorithm where only target knowledge is transferable. Secondly, we use subspace alignment trick to propose a novel transfer domain adaptation method. To evaluate the proposal, we use large-scale data sets. Numerical results, based on accuracy and computational time are provided with comparison with state-of-the-art methods.","PeriodicalId":6636,"journal":{"name":"2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"28 1","pages":"1006-1010"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78951565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Short URLs in Tweets to Improve Twitter Opinion Mining
Pub Date: 2017-12-01 | DOI: 10.1109/ICMLA.2017.00-28
A. Pavel, V. Palade, R. Iqbal, Diana Hintea
Using short URLs in Twitter messages has grown in popularity in the past few years, mostly because Twitter, one of the most popular social media networks, imposes a 140-character limit on messages distributed over the network. This paper analyzes the use of short URLs by Twitter users. Specifically, the goal is to examine the content pointed to by short URLs, as well as its potential impact on the performance of sentiment analysis (opinion mining) tasks. Opinion mining based on Twitter feeds has been used in an array of applications, including healthcare, identifying public opinion on political issues, financial modeling, and advertising. Past research has, however, completely disregarded tweets that contain URLs. It is not hard to see how opinion mining could be improved, considering that Twitter users regularly post URLs pointing to articles endorsing a particular political figure, articles in important financial outlets, or reviews of products. This study is based on the analysis of three distinct Twitter datasets with varying numbers of tweets that include short URLs. Popular machine learning techniques used in opinion mining were deployed in different experimental settings to determine the most effective options.
{"title":"Using Short URLs in Tweets to Improve Twitter Opinion Mining","authors":"A. Pavel, V. Palade, R. Iqbal, Diana Hintea","doi":"10.1109/ICMLA.2017.00-28","DOIUrl":"https://doi.org/10.1109/ICMLA.2017.00-28","url":null,"abstract":"Using short URLs in Twitter messages has increased in popularity in the past few years. This is mostly due to the fact that Twitter, as one of the most popular social media networks, imposes a 140 character limit to the messages distributed over the network. This paper analyzes the use of short URLs by Twitter users. Specifically, the goal is to examine the content pointed by the short URLs as well as the potential impact on the performance of sentiment analysis (opinion mining) tasks. Opinion mining based on Twitter feed has been used in an array of applications, including healthcare, identifying public opinion on political issues, financial modeling and advertising. Past research has however completely disregarded tweets which contain URLs. It is not hard to see how opinion mining can be improved considering the fact that Twitter users regularly post URLs pointing to articles endorsing a particular political figure, articles in important financial outlets or reviews of products. This study is based on the analysis of three distinct Twitter datasets with varying number of tweets which include short URLs. Popular machine learning techniques used in opinion mining were deployed in different experimental settings to conclude which are the most lucrative options.","PeriodicalId":6636,"journal":{"name":"2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"3 1","pages":"965-970"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86158905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Hybrid Scheme for Fault Diagnosis with Partially Labeled Sets of Observations
Pub Date: 2017-12-01 | DOI: 10.1109/ICMLA.2017.0-177
R. Razavi-Far, Ehsan Hallaji, M. Saif, L. Rueda
Machine learning techniques are widely used for diagnosing faults to guarantee the safe and reliable operation of systems. Among these techniques, semi-supervised learning can help in diagnosing faulty states and making decisions on partially labeled data, where only a small number of labeled observations, along with a large number of unlabeled observations, are collected from the process. It is therefore important to critically study the use of semi-supervised techniques for both dimensionality reduction and fault classification. In this work, three state-of-the-art semi-supervised dimensionality reduction techniques are used to produce informative features for semi-supervised fault classifiers. This study aims to identify the best pairing of semi-supervised dimensionality reduction and classification techniques that can be integrated into the diagnostic scheme for decision making under partially labeled sets of observations.
{"title":"A Hybrid Scheme for Fault Diagnosis with Partially Labeled Sets of Observations","authors":"R. Razavi-Far, Ehsan Hallaji, M. Saif, L. Rueda","doi":"10.1109/ICMLA.2017.0-177","DOIUrl":"https://doi.org/10.1109/ICMLA.2017.0-177","url":null,"abstract":"Machine learning techniques are widely used for diagnosing faults to guarantee the safe and reliable operation of the systems. Among various techniques, semi-supervised learning can help in diagnosing faulty states and decision making in partially labeled data, where only a few number of labeled observations along with a large number of unlabeled observations are collected from the process. Thus, it is crucial to conduct a critical study on the use of semi-supervised techniques for both dimensionality reduction and fault classification. In this work, three state-of-the- art semi-supervised dimensionality reduction techniques are used to produce informative features for semi-supervised fault classifiers. This study aims to achieve the best pair of the semisupervised dimensionality reduction and classification techniques that can be integrated into the diagnostic scheme for decision making under partially labeled sets of observations.","PeriodicalId":6636,"journal":{"name":"2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"79 1","pages":"61-67"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83036508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Spatio-Temporal Hedonic House Regression Model
Pub Date: 2017-12-01 | DOI: 10.1109/ICMLA.2017.00-94
T. Oladunni, Sharad Sharma, Raymond Tiwang
This work presents an algorithmic investigation of a housing market spanning 11 years using hedonic pricing theory. An improved pricing model will benefit home buyers and sellers, real estate agents and appraisers, government, and mortgage lenders. Hedonic pricing theory is an econometric concept that explains the market value of a differentiated commodity through implicit pricing. Exploiting the spatially dependent nature of the housing market, we created new submarkets. One model was built with the new submarkets, while another was built using the existing submarkets. Random forest and LASSO were trained on the two models. We argue that our approach considerably reduces the dimensionality of a spatio-temporal hedonic house pricing model without a significant reduction in its performance.
{"title":"A Spatio - Temporal Hedonic House Regression Model","authors":"T. Oladunni, Sharad Sharma, Raymond Tiwang","doi":"10.1109/ICMLA.2017.00-94","DOIUrl":"https://doi.org/10.1109/ICMLA.2017.00-94","url":null,"abstract":"This work focuses on an algorithmic investigation of the housing market spanning 11 years using the hedonic pricing theory. An improved pricing model will benefit home buyers and sellers, real estate agents and appraisers, government and mortgage lenders. Hedonic pricing theory is an econometric concept that explains the market value of a differentiated commodity using implicit pricing. Exploiting the spatial dependent nature of the housing market, we created new submarkets. A model was built with the new submarket, while another one was built using the existing submarket. Random forest and LASSO were trained with the two models. We argue that our approach has a considerable impact on the dimension of a spatio–temporal hedonic house pricing model without a significant reduction in its performance.","PeriodicalId":6636,"journal":{"name":"2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"1022 1","pages":"607-612"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88308296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting Waiting Time Overflow on Bank Teller Queues
Pub Date: 2017-12-01 | DOI: 10.1109/ICMLA.2017.00-51
Ricardo Silva Carvalho, Rommel N. Carvalho, G. N. Ramos, R. Mourão
This study proposes a predictive model to detect delays in bank teller queues. Since penalties and fines are applied to branches that leave their clients waiting too long, detecting these cases as early as possible is essential. Four models were tested: one using a queuing theory formula and three using data mining algorithms -- Deep Learning (DL), Gradient Boosting Machine (GBM), and Random Forest (RF). The results indicated that the GBM model was the most efficient, with an accuracy of 97% and an F1-measure of 75%.
{"title":"Predicting Waiting Time Overflow on Bank Teller Queues","authors":"Ricardo Silva Carvalho, Rommel N. Carvalho, G. N. Ramos, R. Mourão","doi":"10.1109/ICMLA.2017.00-51","DOIUrl":"https://doi.org/10.1109/ICMLA.2017.00-51","url":null,"abstract":"This study proposes a predictive model to detect the delay in bank teller queues. Since there are penalties and fines applied to the branches that leave their clients waiting for a long time, detecting these cases as early as possible is essential. Four models were tested: one using a Queuing Theory's formula and the other three using Data Mining algorithms -- Deep Learning (DL), Gradient Boost Machine (GBM), and Random Forest (RF). The results indicated the GBM model as the most efficient, with an accuracy of 97% and a F1-measure of 75%.","PeriodicalId":6636,"journal":{"name":"2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"60 1","pages":"842-847"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89174076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}