Pub Date: 2016-11-01  DOI: 10.1109/TAAI.2016.7880155
Chun-Jung Chen, L. Shiau, Tien-Chi Chen
This paper presents a two-layer recurrent neural network for controlling the speed of glass transported by a linear servo motor on an Automated Optical Inspection (AOI) system platform. The network consists of an identifier and a controller: the identifier captures the feedback signal from the position sensor, and the controller runs on a microprocessor to supply an adaptive PWM signal. The glass in the AOI system is transported and controlled by the linear servo motor, and the PWM signal is generated by a dsPIC30F30XX-series microprocessor. The theoretical formulation of the proposed neural network is derived, its stability is analyzed, and experiments demonstrate that the method performs very well.
Title: "Speed control in AOI system by using neural networks algorithm"
Pub Date: 2016-11-01  DOI: 10.1109/TAAI.2016.7880164
O. Y. Yuliana, Chia-Hui Chang
Web data extraction is an essential task for web data integration. Most research focuses on extraction from list-pages, detecting the data-rich section and segmenting record boundaries. Detail-pages, however, contain all-inclusive product information on each page, so the number of data attributes that need to be aligned is much larger. In this paper, we formulate data extraction as the alignment of leaf nodes from DOM trees and propose AFIS, Annotation-Free Induction of full Schema for detail pages. AFIS applies divide-and-conquer and Longest Increasing Subsequence (LIS) algorithms to mine landmarks from the input. Experiments show that AFIS outperforms RoadRunner, FivaTech, and TEX on selected data (F1 0.990). For full-schema evaluation (all data), AFIS also achieves the highest average performance (F1 0.937) compared with TEX and RoadRunner.
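The Longest Increasing Subsequence step can be illustrated in isolation: given the positions that one page's matched leaf nodes occupy in another page's leaf sequence (hypothetical data below; AFIS's actual matching and landmark mining are more involved), the LIS keeps the largest order-consistent subset of matches as candidate landmarks. A minimal O(n log n) sketch:

```python
import bisect

def lis_indices(seq):
    """Indices of one longest strictly increasing subsequence, O(n log n)."""
    tails_val, tails_idx = [], []     # parallel arrays: LIS tail values / their indices
    prev = [-1] * len(seq)            # back-pointers for reconstruction
    for i, x in enumerate(seq):
        k = bisect.bisect_left(tails_val, x)
        if k == len(tails_val):
            tails_val.append(x)
            tails_idx.append(i)
        else:
            tails_val[k] = x
            tails_idx[k] = i
        prev[i] = tails_idx[k - 1] if k else -1
    out, i = [], tails_idx[-1] if tails_idx else -1
    while i != -1:
        out.append(i)
        i = prev[i]
    return out[::-1]

# Positions that page A's leaves occupy in page B's leaf sequence (hypothetical);
# the LIS keeps the order-consistent matches as landmarks.
positions = [0, 4, 2, 3, 7, 5, 8]
landmarks = lis_indices(positions)    # indices into `positions`
```

Matches whose positions fall outside the LIS cross over other matches and are discarded, which is what makes the surviving pairs safe anchors for recursive divide-and-conquer alignment.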
Title: "AFIS: Aligning detail-pages for full schema induction"
Pub Date: 2016-11-01  DOI: 10.1109/TAAI.2016.7880153
Yi-Lin Tsai, Yu-Chun Wang, Chen-Wei Chung, Shih-Chieh Su, Richard Tzong-Han Tsai
In recent years, aspect-category-based sentiment analysis has been studied in terms of predefined categories. In this paper, we target two sub-tasks of SemEval-2014 Task 4 on aspect-based sentiment analysis: aspect category detection and aspect category polarity. We adopt the pre-identified set of aspect categories {food, price, service, ambience, miscellaneous} defined by SemEval-2014. The majority of submissions tackled these two sub-tasks with machine learning, mainly using n-gram and sentiment-lexicon features. Their difficulty is that some opinion words (e.g., "good") are general and cannot be tied to any particular category. By contrast, we use aspect-opinion pairs as one of our features to overcome this difficulty. To detect these pairs, we identify opinion words in customer reviews and then detect their related aspect terms with dependency rules. The system is built for the restaurant domain and applied to Chinese customer reviews. Using Word2Vec to detect aspect category polarity, our experiment achieved 87.5% accuracy; aspect-opinion pair features contribute 88.3% accuracy, and when all features are employed, accuracy improves from 84.4% to 89.0%. Experimental results demonstrate the effectiveness of aspect-opinion pair features for aspect-category-based sentiment classification.
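As an illustration of the pair-detection step, the sketch below pairs opinion words with aspect terms over dependency-parse edges and maps each term to one of the five categories. The lexicons, the term-to-category table, and the two rules (`amod`, `nsubj`) are simplified assumptions, not the paper's actual resources:

```python
# Toy lexicons (assumptions; the paper's lexicons and rules are richer).
OPINION_WORDS = {"good", "delicious", "expensive", "slow"}
TERM_TO_CATEGORY = {"steak": "food", "price": "price", "waiter": "service"}

def aspect_opinion_pairs(dep_edges):
    """Pair opinion words with the aspect terms they modify.

    dep_edges: (head, relation, dependent) triples from a dependency parse.
    Two rules: 'amod' (delicious steak) and 'nsubj' (the waiter is slow)."""
    pairs = []
    for head, rel, dep in dep_edges:
        if rel == "amod" and dep in OPINION_WORDS:
            pairs.append((head, dep))           # noun modified by opinion adjective
        elif rel == "nsubj" and head in OPINION_WORDS:
            pairs.append((dep, head))           # subject of an opinion predicate
    # Attach the aspect category of each detected aspect term.
    return [(term, op, TERM_TO_CATEGORY.get(term, "miscellaneous"))
            for term, op in pairs]

# "The delicious steak ... the waiter is slow" (pre-parsed, hypothetical edges)
edges = [("steak", "amod", "delicious"), ("slow", "nsubj", "waiter")]
```

Each resulting (aspect, opinion, category) triple anchors a general opinion word like "good" to a concrete category, which is exactly what plain n-gram features cannot do.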
Title: "Aspect-category-based sentiment classification with aspect-opinion relation"
Pub Date: 2016-11-01  DOI: 10.1109/TAAI.2016.7880161
Naoki Iijima, M. Hayano, Ayumi Sugiyama, T. Sugawara
This paper proposes a task allocation method in which agents attempt to maximize social utility while also giving weight to individual preferences based on their own specifications and capabilities. Owing to recent advances in computer and network technologies, many services can be provided by appropriately combining multiple types of information and different computational capabilities. The tasks performed to deliver these services are executed by allocating them to appropriate agents, i.e., computational entities with specific functionalities. However, these tasks are numerous and appear simultaneously, making task allocation a challenging combinatorial problem. The proposed method, which builds on our previous work, allocates resources and tasks to appropriate agents by taking both social utility and individual preferences into account. We experimentally demonstrate that the appropriate strategy for deciding the preference depends on the type of task and the features of the reward function as well as the social utility.
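One way to picture the trade-off is a greedy allocator that scores each agent-task pair by a weighted sum of social utility and individual preference. The weight `w` and the one-task-per-agent restriction are assumptions of this sketch, not the paper's method:

```python
def allocate(tasks, agents, social, pref, w=0.3):
    """Greedily assign each task to the free agent with the best combined score.

    social[a][t]: contribution to social utility if agent a runs task t
    pref[a][t]:   agent a's individual preference for task t
    w:            weight on preference vs. social utility (a free parameter here)
    """
    score = lambda a, t: (1 - w) * social[a][t] + w * pref[a][t]
    assignment, free_agents = {}, set(agents)
    for t in tasks:
        if not free_agents:
            break
        best = max(free_agents, key=lambda a: score(a, t))
        assignment[t] = best
        free_agents.discard(best)
    return assignment

# Two agents, two tasks: a2 wins t1 on the combined score, leaving t2 to a1.
social = {"a1": {"t1": 1.0, "t2": 0.2}, "a2": {"t1": 0.9, "t2": 0.8}}
pref = {"a1": {"t1": 0.0, "t2": 1.0}, "a2": {"t1": 1.0, "t2": 0.0}}
result = allocate(["t1", "t2"], ["a1", "a2"], social, pref, w=0.3)
```

Note that with `w=0` the allocator is purely social and sends t1 to a1; the preference term flips the assignment, which is the kind of interaction the paper's experiments examine.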
Title: "Analysis of task allocation based on social utility and incompatible individual preference"
Pub Date: 2016-11-01  DOI: 10.1109/TAAI.2016.7880159
Chi-Ruei Li, Addicam V. Sanjay, Shao-Wen Yang, Shou-de Lin
In this work, we address transfer learning for sequential recommendation models. Most state-of-the-art recommendation systems consider user preference and give customized results to different users. However, for users without enough data, personalized recommendation systems cannot infer preferences well or rank items precisely. Recently, transfer learning techniques have been applied to this problem: although the lack of data in the target domain may lead to underfitting, data from auxiliary domains can be used to assist model training. Most recommendation systems combined with transfer learning target the rating prediction problem, where user feedback is explicit and not sequential. In this paper, we apply transfer learning to a model that utilizes both user preference and sequential information; to the best of our knowledge, no previous work has addressed this problem. Experiments on real-world datasets demonstrate that our framework improves prediction accuracy by utilizing auxiliary data.
Title: "Transfer learning for sequential recommendation model"
In recent years, due to the rapid development of e-commerce, personalized recommendation systems have prevailed in product marketing. However, recommendation systems rely heavily on big data, which puts businesses in the early stages of development in a difficult position. We design several methods, including a traditional classifier, heuristic scoring, and machine learning, to build a recommendation system, and integrate content-based and collaborative filtering into a hybrid recommendation system using Co-Clustering with Augmented Matrices (CCAM). The data sources include users' personas, derived from actions taken in the app and on Facebook, as well as product information derived from the web. For this particular app, more than 50% of users clicked fewer than 10 times in 1.5 years, leading to insufficient data; we therefore face a cold-start problem in analyzing user information. To obtain sufficient purchasing records, we analyzed frequent users and used web crawlers to enrich our item-based data, raising F-scores from 0.756 to 0.802. Heuristic scoring greatly enhances the efficiency of our recommendation system.
Title: "User behavior analysis and commodity recommendation for point-earning apps"
Authors: Yu-Ching Chen, Chia-Ching Yang, Yan-Jian Liau, Chia-Hui Chang, Pin-Liang Chen, Ping-Che Yang, Tsun Ku
Pub Date: 2016-11-01  DOI: 10.1109/TAAI.2016.7880109
Pub Date: 2016-11-01  DOI: 10.1109/TAAI.2016.7880162
M. Domb, G. Leshem, Elisheva Bonchek-Dokow, Esther David, Yuh-Jye Lee
IoT systems collect vast amounts of data that can be used to track and analyze the structure of future recorded data. However, due to limited computational power, bandwidth, and storage, this data cannot be stored as is; it must be reduced in such a way that the ability to analyze future data based on past data is not compromised. We propose a parameterized method for sampling the data in an optimal way. Our method has three parameters: an averaging method for constructing an average data cycle from past observations, an envelope method for defining an interval around the average cycle, and an entropy method for comparing new data cycles to the constructed envelope. These parameters can be adjusted to the nature of the data in order to find the optimal representation for classifying new cycles, identifying anomalies, and predicting future cycle behavior. In this work we concentrate on finding the optimal envelope, given an averaging method and an entropy method. We demonstrate the approach with a case study of meteorological data on El Niño years.
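A minimal sketch of the envelope idea, assuming the averaging method is a pointwise mean, the envelope is mean ± k·std, and the entropy comparison is replaced by a simple fraction-outside score (the paper searches over such parameter choices rather than fixing them):

```python
from statistics import mean, stdev

def build_envelope(cycles, k=2.0):
    """Pointwise mean +/- k*std envelope over aligned past cycles."""
    cols = list(zip(*cycles))                 # transpose: one column per time step
    lo = [mean(c) - k * stdev(c) for c in cols]
    hi = [mean(c) + k * stdev(c) for c in cols]
    return lo, hi

def outside_fraction(cycle, envelope):
    """Share of points outside the envelope: a simple stand-in for the
    entropy-based comparison of a new cycle against the envelope."""
    lo, hi = envelope
    out = sum(1 for x, l, h in zip(cycle, lo, hi) if x < l or x > h)
    return out / len(cycle)

# Three past daily cycles of a sensor reading (toy data).
env = build_envelope([[10, 12, 11], [11, 13, 12], [9, 11, 10]])
score = outside_fraction([30, 12, 11], env)   # first reading falls outside
```

Only the envelope (two short sequences) needs to be stored, not the raw history, which is the storage reduction the abstract describes.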
Title: "Sparse sampling for sensing temporal data — building an optimized envelope"
Pub Date: 2016-11-01  DOI: 10.1109/taai.2016.7880104
H. Liao
In this talk, I will cover two topics closely related to AI: "spatiotemporal learning of basketball offensive strategies" and "learning to classify shot types." Video-based group behavior analysis is drawing attention for its rich applications in sports, military, surveillance, and biological observation. Focusing on the analysis of basketball offensive strategies, the first topic introduces a systematic approach to unsupervised modeling of group behaviors, which is then used to perform tactics classification. In the second topic, a deep-net-based fusion strategy is proposed to classify shots in concert videos. Varying types of shots are fundamental elements in the language of film, commonly used by a visual storytelling director to convey emotion, ideas, and art. To classify shot types from images, we present a new framework that addresses two key issues. First, we learn more effective features by fusing the layer-wise outputs extracted from a deep convolutional neural network (CNN). We then introduce a probabilistic fusion model, termed the error-weighted deep cross-correlation model, to boost classification accuracy. We provide extensive experimental results on a dataset of live concert videos to demonstrate the advantage of the proposed approach.
Title: "Keynote speech: Keynote 1: It's all about AI"
Pub Date: 2016-11-01  DOI: 10.1109/TAAI.2016.7880165
Guanyao Li, Zi-Yi Wen, Wen-Yuan Zhu
With the popularity of smartphones, many users use the check-in function to share their current activity with friends for more social interaction in location-based social networks (LBSNs). Given the success of viral marketing in advertising, prior work has exploited viral marketing for location promotion via check-ins in LBSNs: k users are selected to check in at a target location so that, through the propagation of check-ins, as many other users as possible also check in. However, prior work considers promoting only one location at a time. This is ineffective for retail chains, which would have to select k users separately for each retail store. In this paper, we focus on selecting k users to check in at locations in a given bundle so as to maximize the number of users who, through information propagation in the LBSN, check in at at least one location in the bundle. To solve this problem, we first propose the Multi-Location-aware Independent Cascade Model (MLICM) to describe how information about a bundle of locations propagates in an LBSN. We then propose algorithms to effectively and efficiently select the k users based on MLICM. Experimental results on two real datasets show that our approach outperforms state-of-the-art approaches.
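For intuition, the sketch below pairs a plain independent-cascade simulation with Kempe-style greedy seed selection, the standard baseline that an MLICM-based selector would extend to a bundle of locations; the edge probability `p` and the toy graph are assumptions of this sketch:

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=random):
    """One independent-cascade run; returns the set of activated users."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_seeds(graph, k, runs=200, p=0.1):
    """Pick k seeds by greedy marginal expected spread (Monte-Carlo estimate)."""
    users = set(graph) | {v for vs in graph.values() for v in vs}
    seeds = []
    for _ in range(k):
        def avg_spread(s):
            return sum(len(simulate_ic(graph, seeds + [s], p))
                       for _ in range(runs)) / runs
        seeds.append(max(users - set(seeds), key=avg_spread))
    return seeds

# Toy follower graph; with p=1.0 the cascade is deterministic.
graph = {"a": ["b", "c"], "b": ["d"], "x": ["y"]}
```

In the bundle setting, the objective inside `avg_spread` would count a user as covered once any location in the bundle reaches them, which is the coverage semantics MLICM formalizes.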
Title: "Promoting a bundle of locations via viral marketing in location-based social networks"
Pub Date: 2016-11-01  DOI: 10.1109/TAAI.2016.7880172
Chih-Yu Lin, Chih-Chieh Hung, Po-Ruey Lei
With the rise of mobile devices, trajectory data can be easily collected and used in applications such as destination prediction, public transportation optimization, and travel route recommendation. However, due to its spatio-temporal nature, raw trajectory data usually contains redundant movement information. This observation motivates trajectory simplification approaches, which discard some points while preserving specific features such as position and direction. Most existing simplifications ignore the importance of velocity features. This paper proposes an adaptive trajectory simplification approach that takes the velocity feature into account. Specifically, the Adaptive Trajectory Simplification (ATS) algorithm preserves not only the position feature but also the velocity feature of the given trajectories. ATS groups velocity values into several intervals, which are used to partition trajectories into velocity-preserving segments. The simplified trajectory is then derived by applying a position-preserving simplification to each segment, with the threshold of the position-preserving approach determined without manual setting. Extensive experiments on a real trajectory dataset from Porto show that ATS simplifies trajectories effectively while preserving both the velocity and position features.
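The two-stage idea can be sketched as follows: bucket per-step speeds into intervals, cut the trajectory where the bucket changes, then run a classic position-preserving simplification (Douglas-Peucker here) on each segment. The interval bounds and the fixed `eps` are illustrative assumptions; ATS derives both from the data:

```python
def split_by_speed(points, times, bounds=(1.0, 5.0)):
    """Cut a trajectory wherever the speed class (slow/medium/fast) changes.

    points: [(x, y)], times: [t]; `bounds` are hypothetical interval edges."""
    def speed_class(i):
        (x0, y0), (x1, y1) = points[i], points[i + 1]
        v = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / (times[i + 1] - times[i])
        return sum(v >= b for b in bounds)          # 0, 1 or 2
    cuts = [0]
    for i in range(1, len(points) - 1):
        if speed_class(i) != speed_class(i - 1):
            cuts.append(i)
    cuts.append(len(points) - 1)
    return [points[a:b + 1] for a, b in zip(cuts, cuts[1:])]

def douglas_peucker(seg, eps=0.5):
    """Position-preserving simplification of one segment (fixed eps here)."""
    if len(seg) < 3:
        return seg
    (x0, y0), (x1, y1) = seg[0], seg[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Perpendicular distance of each point to the segment's end-to-end chord.
    dist = [abs(dy * (x - x0) - dx * (y - y0)) / norm for x, y in seg]
    i = max(range(1, len(seg) - 1), key=dist.__getitem__)
    if dist[i] <= eps:
        return [seg[0], seg[-1]]
    return douglas_peucker(seg[:i + 1], eps)[:-1] + douglas_peucker(seg[i:], eps)
```

Because each segment is speed-homogeneous, simplifying within segments cannot merge a slow stretch into a fast one, which is how the velocity feature survives the point removal.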
Title: "A velocity-preserving trajectory simplification approach"