
Latest publications from the 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)

A Cluster Analysis of Challenging Behaviors in Autism Spectrum Disorder
Elizabeth Stevens, Abigail Atchison, Laura Stevens, Esther Hong, D. Granpeesheh, Dennis R. Dixon, Erik J. Linstead
We apply cluster analysis to a sample of 2,116 children with Autism Spectrum Disorder in order to identify patterns of challenging behaviors observed in home and center-based clinical settings. In the largest study of this type to date, and the first to employ machine learning, our results indicate that while the presence of multiple challenging behaviors is common, in most cases a dominant behavior emerges. Furthermore, the same trend is observed when we train our cluster models on the male and female samples separately. This work provides a basis for future studies of the relationship between challenging behavior profiles and learning outcomes, with the ultimate goal of providing personalized therapeutic interventions with maximum efficacy and minimum time and cost.
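As a rough illustration of the kind of cluster analysis described above, the sketch below groups per-child behavior profiles with k-means. The behavior categories, the choice of k, and the use of scikit-learn are illustrative assumptions rather than details taken from the paper.

```python
# A minimal sketch: clustering per-child behavior profiles with k-means.
# Feature names, the number of clusters, and the synthetic data are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical severity scores for several challenging-behavior categories.
behaviors = ["aggression", "self_injury", "stereotypy", "elopement"]
X = rng.random((2116, len(behaviors)))            # stand-in for the clinical sample

X_scaled = StandardScaler().fit_transform(X)      # put behaviors on a common scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)

# Inspect which behavior dominates each cluster centroid.
for c, center in enumerate(kmeans.cluster_centers_):
    dominant = behaviors[int(np.argmax(center))]
    print(f"cluster {c}: dominant behavior = {dominant}")
```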
DOI: 10.1109/ICMLA.2017.00-85 · Pages: 661-666 · Published: 2017-12-01
Citations: 24
Human Motion Trajectory Analysis Based Video Summarization
Muhammad Ajmal, M. Naseer, Farooq Ahmad, Asma Saleem
Multimedia technology is growing day by day and contributing an enormous amount of video data, especially in the area of security surveillance. Browsing through such a large collection of videos is a challenging and time-consuming task. Despite advances in technology, automatic browsing, retrieval, manipulation, and analysis of large videos still lag far behind. In this paper, a fully automatic human-centric system for video summarization is proposed. In most surveillance applications, human motion is of great interest. In the proposed system, the moving parts in the video are detected using background subtraction, and blobs are extracted from the binary image. Human detection is done through a Histogram of Oriented Gradients (HOG) with a Support Vector Machine (SVM) classifier. Then, the motion of humans is tracked through consecutive frames using a Kalman filter, and the trajectory of each person is extracted. The analysis of these trajectories leads to a meaningful summary that covers only the important parts of the video. One can also mark a region of interest to be included in the summary. Experimental results show that the proposed system reduces a long video to a meaningful summary and saves a great deal of time and cost in terms of storage, indexing, and browsing effort.
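The sketch below outlines the pipeline stages named in the abstract (background subtraction, HOG+SVM person detection, Kalman-filter tracking) using standard OpenCV building blocks. The input file name, parameter values, and single-target tracking are simplifying assumptions, not the authors' implementation.

```python
# A rough sketch of the described pipeline, assuming a hypothetical input video
# and tracking a single person for simplicity.
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")        # hypothetical input video
bg_sub = cv2.createBackgroundSubtractorMOG2()     # moving-region detection

hog = cv2.HOGDescriptor()                         # HOG + linear-SVM person detector
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Constant-velocity Kalman filter over (x, y, dx, dy) for one tracked person.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
kf.errorCovPost = np.eye(4, dtype=np.float32)

trajectory = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg_sub.apply(frame)                    # foreground blobs
    if cv2.countNonZero(mask) < 500:              # skip frames with little motion
        continue
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(rects) > 0:
        x, y, w, h = rects[0]
        center = np.array([[x + w / 2], [y + h / 2]], np.float32)
        kf.correct(center)
    pred = kf.predict()
    trajectory.append((float(pred[0, 0]), float(pred[1, 0])))

cap.release()
# 'trajectory' would then be analyzed to choose the frames kept in the summary.
```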
DOI: 10.1109/ICMLA.2017.0-103 · Pages: 550-555 · Published: 2017-12-01
Citations: 8
Deep Learning Based Link Failure Mitigation
Shubham Khunteta, Ashok Kumar Reddy Chavva
Link failure is a major concern for network operators in enhancing user experience, in present systems as well as upcoming 5G systems. Many factors can cause link failures, for example Handover (HO) failures, poor coverage, and congested cells. Network operators are constantly improving their coverage quality to overcome these issues. However, reducing link failures requires further improvements in present and next-generation (5G) systems. In this paper, we study the applicability of Machine Learning (ML) algorithms to reducing link failures at handover. In the proposed method, signal conditions (RSRP/RSRQ) are continuously observed and tracked using deep neural networks such as a Recurrent Neural Network (RNN) or a Long Short-Term Memory network (LSTM), and the behavior of these signal conditions is fed to another neural network that acts as a classifier, predicting in advance whether the HO will fail or succeed. This advance decision allows the UE to take action to mitigate the possible link failure. The algorithms and model proposed in this paper are the first of their kind to connect past signal conditions to future HO outcomes. We show the performance of the proposed algorithms on both system-simulated and field log data. Given the need for a more proactive UE role in most link-level decisions in 5G systems, the algorithms proposed in this paper are all the more relevant.
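A minimal sketch of the kind of classifier described above: an LSTM that reads a short window of per-step signal measurements (RSRP/RSRQ) and predicts handover failure versus success. The window length, layer sizes, and use of Keras are assumptions for illustration only.

```python
# Sketch of an LSTM handover-failure classifier over signal-condition windows.
# Synthetic data stands in for measured RSRP/RSRQ traces.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

window, n_features = 20, 2                        # 20 time steps of (RSRP, RSRQ)
model = Sequential([
    LSTM(32, input_shape=(window, n_features)),
    Dense(1, activation="sigmoid"),               # P(handover failure)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.randn(1000, window, n_features).astype("float32")
y = np.random.randint(0, 2, size=(1000,))
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
print(model.predict(X[:1]))                       # predicted failure probability
```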
DOI: 10.1109/ICMLA.2017.00-58 · Pages: 806-811 · Published: 2017-12-01
Citations: 24
Machine Learning Methods for 1D Ultrasound Breast Cancer Screening
Neil J. Joshi, Seth D. Billings, Erika Schwartz, S. Harvey, P. Burlina
This study addresses the development of machine learning methods for reduced-space ultrasound to perform automated prescreening of breast cancer. The use of ultrasound in low-resource settings is constrained by a lack of trained personnel and by equipment costs, which motivates the need for automated, low-cost diagnostic tools. We hypothesize that a solution to this problem is the use of 1D ultrasound (a single piezoelectric element). We leverage random forest classifiers to classify 1D samples of various types of tissue phantoms simulating cancerous lesions, benign lesions, and non-cancerous tissues. In addition, we investigate the optimal ultrasound power and frequency parameters to maximize performance. We show preliminary results on 2-, 3-, and 5-class classification problems for the ideal power/frequency combination. These results demonstrate promise for the use of a single-element ultrasound device to screen for breast cancer.
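As a hedged illustration of the classification stage, the sketch below trains a random forest on synthetic feature vectors standing in for 1D ultrasound returns from three phantom classes. The data and the three-class setup are assumptions made only to keep the example self-contained.

```python
# Sketch of random-forest classification over 1D ultrasound features.
# The feature dimension and class labels are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.random((600, 64))                         # stand-in for per-sample features
y = rng.integers(0, 3, size=600)                  # 0: cancerous, 1: benign, 2: normal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```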
DOI: 10.1109/ICMLA.2017.00-76 · Pages: 711-715 · Published: 2017-12-01
Citations: 4
Recognition of Dynamic Hand Gestures from 3D Motion Data Using LSTM and CNN Architectures
Chinmaya R. Naguri, Razvan C. Bunescu
Hand gestures provide a natural, non-verbal form of communication that can augment or replace other communication modalities such as speech or writing. Along with voice commands, hand gestures are becoming the primary means of interaction in games, augmented reality, and virtual reality platforms. Recognition accuracy, flexibility, and computational cost are some of the primary factors that can impact the incorporation of hand gestures into these new technologies, as well as their subsequent retrieval from multimodal corpora. In this paper, we present fast and highly accurate gesture recognition systems based on long short-term memory (LSTM) and convolutional neural network (CNN) architectures that are trained to process input sequences of 3D hand positions and velocities acquired from infrared sensors. When evaluated on real-time recognition of six types of hand gestures, the proposed architectures obtain a 97% F-measure, demonstrating significant potential for practical applications in novel human-computer interfaces.
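The sketch below shows one plausible instantiation of the CNN variant mentioned above: a 1D convolutional network over fixed-length sequences of 3D hand positions and velocities, classifying six gesture types. The sequence length, channel layout, and layer sizes are assumptions, not the paper's architecture.

```python
# Sketch of a 1D CNN gesture classifier over (x, y, z, vx, vy, vz) sequences.
# Synthetic data stands in for sequences captured from an infrared sensor.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, GlobalMaxPooling1D, Dense

steps, channels, n_gestures = 40, 6, 6            # six channels per time step
model = Sequential([
    Conv1D(64, kernel_size=5, activation="relu", input_shape=(steps, channels)),
    GlobalMaxPooling1D(),
    Dense(n_gestures, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.randn(500, steps, channels).astype("float32")
y = np.random.randint(0, n_gestures, size=(500,))
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```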
DOI: 10.1109/ICMLA.2017.00013 · Pages: 1130-1133 · Published: 2017-12-01
Citations: 34
Transfer Learning for Large Scale Data Using Subspace Alignment
Nassara Elhadji-Ille-Gado, E. Grall-Maës, M. Kharouf
A major assumption in many machine learning algorithms is that the training and testing data must come from the same feature space or have the same distribution. However, in real applications, this strong hypothesis does not hold. In this paper, we introduce a new framework for transfer learning in which the source and target domains are represented by subspaces described by eigenvector matrices. To unify the subspace distribution between domains, we propose a fast, efficient approximate SVD for rapid feature generation. To enable transfer learning between domains, we first use a subspace learning approach to develop a domain adaptation algorithm in which only target knowledge is transferable. Second, we use the subspace alignment trick to propose a novel transfer domain adaptation method. To evaluate the proposal, we use large-scale data sets. Numerical results, based on accuracy and computational time, are provided in comparison with state-of-the-art methods.
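For readers unfamiliar with the technique, the sketch below implements classical subspace alignment in the spirit of the approach described above: approximate SVD yields d-dimensional bases for the source and target domains, the source basis is aligned to the target one, and a classifier trained on the aligned source features labels the target data. The dimensions, synthetic data, and choice of classifier are illustrative assumptions; randomized truncated SVD is used here only as a generic stand-in for a fast approximate SVD.

```python
# Sketch of classical subspace alignment for domain adaptation.
import numpy as np
from sklearn.decomposition import TruncatedSVD     # randomized (approximate) SVD
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs = rng.random((1000, 100))                        # labeled source data
ys = rng.integers(0, 2, size=1000)
Xt = rng.random((800, 100)) + 0.3                   # unlabeled, shifted target data

d = 20
Ps = TruncatedSVD(n_components=d, random_state=0).fit(Xs).components_.T  # (100, d)
Pt = TruncatedSVD(n_components=d, random_state=0).fit(Xt).components_.T  # (100, d)

M = Ps.T @ Pt                                       # alignment matrix
Xs_aligned = Xs @ Ps @ M                            # source projected into the target subspace
Xt_proj = Xt @ Pt

clf = LogisticRegression(max_iter=1000).fit(Xs_aligned, ys)
target_predictions = clf.predict(Xt_proj)           # labels transferred to the target domain
```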
DOI: 10.1109/ICMLA.2017.00-20 · Pages: 1006-1010 · Published: 2017-12-01
Citations: 5
Using Short URLs in Tweets to Improve Twitter Opinion Mining
A. Pavel, V. Palade, R. Iqbal, Diana Hintea
Using short URLs in Twitter messages has increased in popularity in the past few years. This is mostly due to the fact that Twitter, as one of the most popular social media networks, imposes a 140-character limit on the messages distributed over the network. This paper analyzes the use of short URLs by Twitter users. Specifically, the goal is to examine the content pointed to by the short URLs as well as the potential impact on the performance of sentiment analysis (opinion mining) tasks. Opinion mining based on Twitter feeds has been used in an array of applications, including healthcare, identifying public opinion on political issues, financial modeling, and advertising. Past research has, however, completely disregarded tweets that contain URLs. It is not hard to see how opinion mining can be improved, considering that Twitter users regularly post URLs pointing to articles endorsing a particular political figure, articles in important financial outlets, or reviews of products. This study is based on the analysis of three distinct Twitter datasets with varying numbers of tweets that include short URLs. Popular machine learning techniques used in opinion mining were deployed in different experimental settings to conclude which are the most effective options.
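A minimal sketch of the experimental idea: append the text resolved from a tweet's short URL to the tweet itself before training a sentiment classifier. The tiny dataset, the fetch_url_text stub, and the TF-IDF plus logistic-regression pipeline are hypothetical stand-ins, not the paper's setup.

```python
# Sketch of augmenting tweet text with the linked article before classification.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def fetch_url_text(url: str) -> str:
    """Stub for resolving a short URL and extracting the linked article's text."""
    return "placeholder article text"             # a real system would fetch the page

tweets = [
    ("Great analysis of the results https://t.co/abc", 1),
    ("Terrible take, do not read https://t.co/xyz", 0),
]
# Concatenate each tweet with the text behind its final token (the short URL).
texts = [t + " " + fetch_url_text(t.split()[-1]) for t, _ in tweets]
labels = [y for _, y in tweets]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Really enjoyed this piece placeholder article text"]))
```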
DOI: 10.1109/ICMLA.2017.00-28 · Pages: 965-970 · Published: 2017-12-01
Citations: 2
A Hybrid Scheme for Fault Diagnosis with Partially Labeled Sets of Observations
R. Razavi-Far, Ehsan Hallaji, M. Saif, L. Rueda
Machine learning techniques are widely used for diagnosing faults to guarantee the safe and reliable operation of systems. Among these techniques, semi-supervised learning can help in diagnosing faulty states and supporting decision making with partially labeled data, where only a small number of labeled observations, along with a large number of unlabeled observations, are collected from the process. Thus, it is crucial to conduct a critical study of the use of semi-supervised techniques for both dimensionality reduction and fault classification. In this work, three state-of-the-art semi-supervised dimensionality reduction techniques are used to produce informative features for semi-supervised fault classifiers. This study aims to identify the best pair of semi-supervised dimensionality reduction and classification techniques that can be integrated into a diagnostic scheme for decision making under partially labeled sets of observations.
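A simplified sketch of the two-stage scheme: reduce dimensionality using all observations (labeled and unlabeled alike), then fit a semi-supervised classifier on the partially labeled, reduced features. PCA and LabelSpreading are generic stand-ins for the specific semi-supervised techniques compared in the paper.

```python
# Sketch of dimensionality reduction followed by semi-supervised fault classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X = rng.random((500, 40))                         # process measurements
y = rng.integers(0, 3, size=500)                  # true fault classes (simulation only)

y_partial = y.copy()
y_partial[rng.random(500) < 0.9] = -1             # -1 marks unlabeled observations

X_reduced = PCA(n_components=10).fit_transform(X)         # uses every observation
model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X_reduced, y_partial)
print("labels inferred for unlabeled points:", model.transduction_[:10])
```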
DOI: 10.1109/ICMLA.2017.0-177 · Pages: 61-67 · Published: 2017-12-01
Citations: 11
A Spatio-Temporal Hedonic House Regression Model
T. Oladunni, Sharad Sharma, Raymond Tiwang
This work focuses on an algorithmic investigation of the housing market spanning 11 years using hedonic pricing theory. An improved pricing model will benefit home buyers and sellers, real estate agents and appraisers, government, and mortgage lenders. Hedonic pricing theory is an econometric concept that explains the market value of a differentiated commodity using implicit pricing. Exploiting the spatially dependent nature of the housing market, we created new submarkets. One model was built with the new submarkets, while another was built using the existing submarkets. Random forest and LASSO were trained on the two models. We argue that our approach has a considerable impact on the dimensionality of a spatio-temporal hedonic house pricing model without a significant reduction in its performance.
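The sketch below illustrates one hedonic specification in the spirit of the model above: sale price regressed on structural attributes plus submarket and sale-year indicators, fitted with LASSO. The variable names and synthetic data are assumptions for illustration only.

```python
# Sketch of a LASSO hedonic regression with spatial submarket and year dummies.
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "sqft": rng.normal(1800, 400, n),
    "bedrooms": rng.integers(1, 6, n),
    "age": rng.integers(0, 60, n),
    "submarket": rng.integers(0, 8, n),           # spatial cluster / submarket id
    "year": rng.integers(2003, 2014, n),          # 11-year study window
})
price = 50_000 + 120 * df["sqft"] + rng.normal(0, 20_000, n)  # synthetic sale prices

X = pd.get_dummies(df, columns=["submarket", "year"], drop_first=True)
lasso = LassoCV(cv=5).fit(X, price)
print("non-zero hedonic coefficients:", int((lasso.coef_ != 0).sum()))
```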
DOI: 10.1109/ICMLA.2017.00-94 · Pages: 607-612 · Published: 2017-12-01
Citations: 8
Predicting Waiting Time Overflow on Bank Teller Queues
Ricardo Silva Carvalho, Rommel N. Carvalho, G. N. Ramos, R. Mourão
This study proposes a predictive model to detect delays in bank teller queues. Since penalties and fines are applied to branches that leave their clients waiting for a long time, detecting these cases as early as possible is essential. Four models were tested: one using a queuing-theory formula and the other three using data mining algorithms, namely Deep Learning (DL), Gradient Boosting Machine (GBM), and Random Forest (RF). The results indicated that the GBM model was the most effective, with an accuracy of 97% and an F1-measure of 75%.
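A minimal sketch of the best-performing setup reported above: a gradient boosting classifier predicting whether a teller queue will exceed its waiting-time limit. The feature set, threshold, and synthetic labels are illustrative assumptions.

```python
# Sketch of a gradient-boosting classifier for waiting-time overflow.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 30, 5000),                    # clients currently in queue
    rng.integers(1, 8, 5000),                     # tellers on duty
    rng.random(5000) * 15,                        # average service time (minutes)
])
y = (X[:, 0] / X[:, 1] * X[:, 2] > 40).astype(int)    # 1 = waiting-time overflow

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = gbm.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "F1:", f1_score(y_te, pred))
```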
DOI: 10.1109/ICMLA.2017.00-51 · Pages: 842-847 · Published: 2017-12-01
Citations: 9