Recently, neural network-based methods have shown impressive performance on the image captioning task, and numerous architectures have been proposed to solve it. In this paper, we present an evaluation of different architectural and optimization alternatives for a neural image captioning model. First, we study an image captioning model comprising two modules: a convolutional neural network that encodes the input image into a fixed-dimensional feature vector, and a recurrent neural network that decodes that representation into a sequence of words describing the input image. We then consider different alternatives for the architecture and the optimization algorithm used to train the model. We conduct a set of experiments on standard benchmark datasets to evaluate different aspects of the captioning system, using the standard evaluation methods of the image captioning literature. Based on the results of these experiments, we offer several suggestions on the architecture and optimization algorithm of an image captioning model that balances performance against the feasibility of deployment on real-world problems with commodity hardware.
Title: Factors Influencing The Performance of Image Captioning Model: An Evaluation
Authors: Duc-Cuong Dao, Thi-Oanh Nguyen, S. Bressan
DOI: 10.1145/3007120.3007136
Published in: Proceedings of the 14th International Conference on Advances in Mobile Computing and Multi Media, 2016-11-28
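The encoder-decoder pipeline described in the abstract can be sketched as follows. This is a toy illustration with random placeholder weights and vocabulary, not the paper's model; a real system would use a trained CNN encoder and an LSTM/GRU decoder.

```python
import numpy as np

# Illustrative sketch only: the CNN encoder is assumed to have already produced
# a fixed-dimensional image feature vector; a toy RNN decoder then emits words
# greedily. All weights and the vocabulary are random placeholders.
rng = np.random.default_rng(0)
VOCAB = ["<start>", "<end>", "a", "dog", "runs", "on", "grass"]
D_FEAT, D_HID = 8, 8

W_ih = rng.normal(size=(D_HID, D_FEAT))       # input-to-hidden weights
W_hh = rng.normal(size=(D_HID, D_HID))        # hidden-to-hidden weights
W_out = rng.normal(size=(len(VOCAB), D_HID))  # hidden-to-vocabulary logits
E = rng.normal(size=(len(VOCAB), D_FEAT))     # word embeddings

def greedy_caption(image_feat, max_len=10):
    """Decode a caption greedily from an image feature vector."""
    h = np.tanh(W_ih @ image_feat)            # init hidden state from the image
    word = VOCAB.index("<start>")
    caption = []
    for _ in range(max_len):
        h = np.tanh(W_ih @ E[word] + W_hh @ h)
        word = int(np.argmax(W_out @ h))      # most probable next word
        if VOCAB[word] == "<end>":
            break
        caption.append(VOCAB[word])
    return caption

cap = greedy_caption(rng.normal(size=D_FEAT))
```

With trained weights, the same loop produces the word sequence that the evaluated architectures differ in how they score.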
Yosra Ben Dhief, Y. Djemaiel, S. Rekhis, N. Boudriga
The infrastructures of Supervisory Control and Data Acquisition (SCADA) systems have evolved over time to provide more efficient supervision services. Despite the changes made to SCADA architectures, several enhancements are still required to address the need for: a) large-scale supervision using a high number of sensors; b) a shorter reaction time when a malicious activity is detected; and c) a high degree of interoperability between SCADA systems in order to prevent the propagation of incidents. In this context, we propose a novel sensor-cloud-based SCADA infrastructure to monitor large-scale and interdependent critical infrastructures, making effective use of sensor clouds to increase the supervision coverage and improve the processing time. It also ensures interoperability between interdependent SCADA systems by offering them a set of services, which are created from templates and associated with sets of virtual sensors. A simulation is conducted to demonstrate the effectiveness of the proposed architecture.
Title: A Novel Sensor Cloud Based SCADA infrastructure for Monitoring and Attack prevention
DOI: 10.1145/3007120.3007169
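The template-to-virtual-sensor mechanism mentioned in the abstract might look roughly like this. All names, types and the inventory below are hypothetical, since the paper's concrete interfaces are not given here.

```python
from dataclasses import dataclass

# Hypothetical sketch: a service template is instantiated into a set of
# virtual sensors, each bound to the physical sensors it aggregates.
@dataclass
class ServiceTemplate:
    name: str
    required_kinds: list        # sensor kinds the SCADA service needs

@dataclass
class VirtualSensor:
    kind: str
    physical_ids: list          # physical sensors aggregated behind it

def instantiate(template, inventory):
    """Create one virtual sensor per required kind from the physical inventory."""
    sensors = []
    for kind in template.required_kinds:
        ids = [sid for sid, k in inventory.items() if k == kind]
        if not ids:
            raise ValueError(f"no physical sensor of kind {kind!r}")
        sensors.append(VirtualSensor(kind, ids))
    return sensors

inventory = {"s1": "pressure", "s2": "pressure", "s3": "temperature"}
svc = instantiate(ServiceTemplate("pipeline-monitor",
                                  ["pressure", "temperature"]), inventory)
```

A SCADA consuming such a service sees only the virtual sensors, which is what lets interdependent SCADAs share supervision data without exposing their physical deployments.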
This paper presents the user experience design approach taken for a mobile cognitive assessment tool. Taking a multidisciplinary approach with user-centred assessment and feedback, the design of this tool was tailored to provide a usable and intuitive user experience. Key to user participation are ease of use and minimal time on task. To address this, the selected measures were carefully chosen to make the testing process simple and easy to engage with. Following a pre-validated, iterative design approach, an acceptable and engaging user experience was designed while retaining the ability to measure multiple aspects of a user's condition and environment. Accurate assessment of cognitive fatigue requires a wide range of environmental user data in order to understand the participant's current cognitive fatigue level; therefore, measures of physical, cognitive, social and emotional aspects were included within the application.
Title: User Centred Design of a Smartphone-based Cognitive Fatigue Assessment Application
Authors: Edward Price, G. Moore, L. Galway, M. Linden
DOI: 10.1145/3007120.3007122
Biometrics have become important for authentication on mobile devices, e.g. to unlock them before use. One way to protect biometric information stored on mobile devices from disclosure is to use embedded smart cards (SCs) with biometric match-on-card (MOC) approaches. The computational restrictions of SCs, however, also limit the biometric matching procedures. We present a mobile MOC approach that uses offline training to obtain authentication models with a simple internal representation in the final trained state, adapting the features and the model representation to enable their use on SCs. The obtained model is used within SCs on mobile devices without requiring retraining when enrolling individual users. We apply our approach to acceleration-based mobile gait authentication, using a 16-bit integer range Java Card, and evaluate authentication performance and computation time on the SC using a publicly available dataset. Results indicate that our approach is feasible with an equal error rate of ~12% and a computation time below 2 s on the SC, including data transmissions and computations. To the best of our knowledge, this represents the first practically feasible approach towards acceleration-based gait match-on-card authentication.
Title: Mobile Gait Match-on-Card Authentication from Acceleration Data with Offline-Simplified Models
Authors: R. Findling, M. Hölzl, R. Mayrhofer
DOI: 10.1145/3007120.3007132
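One plausible reading of an "offline-simplified" model for a 16-bit integer card is fixed-point quantization of a decision function trained offline, so that matching on the card needs only integer arithmetic. The scale factor, weights and features below are placeholders, not the paper's values.

```python
import numpy as np

# Sketch (assumptions flagged): quantize an offline-trained linear score to
# int16 weights/features so a 16-bit card can evaluate it with an int32
# accumulator. Real MOC matching would add a threshold decision on-card.
SCALE = 256                           # fixed-point scale (placeholder)

def quantize(v):
    """Map float values to int16 fixed-point representation."""
    return np.clip(np.round(v * SCALE), -32768, 32767).astype(np.int16)

def card_score(wq, xq):
    """Integer-only dot product, as a 16-bit card could compute it."""
    return int(np.dot(wq.astype(np.int32), xq.astype(np.int32)))

w = np.array([0.5, -0.25, 0.125])     # offline-trained weights (float)
x = np.array([0.2, 0.4, -0.1])        # gait feature vector (float)
score_float = float(np.dot(w, x))                              # reference
score_card = card_score(quantize(w), quantize(x)) / (SCALE * SCALE)
```

The quantized score tracks the float score closely while the card itself never touches floating point, which is what keeps the on-card computation time low.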
Mobile distributed systems involve multiple mobile computers processing data and communicating the results to each other, as in electronic commerce or online voting, where the users are geographically separated. Our contribution concerns mobile distributed applications based on embedded platforms such as smartphones or tablets. We define a protocol called MEXP, which stands for Mobile Exchange eXperiment Protocol. It allows local resources on a mobile device to be exposed to the other mobile computers of the distributed system. The resources are pictures and sounds recorded with a mobile device during lab activities; a local Wi-Fi network is used to keep the recorded data secure. The lab activities evolve over time, and the observers have remote access to the pictures and sounds for validation and tagging. This work has resulted in the acceptance of our mobile distributed application by an academic training team in the Biology department.
Title: A mobile distributed system for remote resource access
Authors: C. Dumont, F. Mourlin, Laurent Nel
DOI: 10.1145/3007120.3007123
The domain of image processing comprises many methods and algorithms for the analysis of signals representing data sets such as photos or videos. In this paper we present a discussion and analysis of, on the one hand, classical image processing methods such as the Fourier transform and, on the other hand, neural networks. Specifically, we focus on multi-layer and convolutional neural networks and give guidelines on how images can be analyzed effectively and efficiently. To speed up performance, we identify various parallel software and hardware environments and evaluate how parallelism can be used to improve the performance of neural network operations. Based on our findings, we derive several guidelines for applying different parallelization approaches on various sequential and parallel hardware infrastructures.
Title: Implementation Guidelines for Image Processing with Convolutional Neural Networks
Authors: Florian Bordes, E. Schikuta
DOI: 10.1145/3007120.3007165
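As a concrete instance of the parallelism such guidelines target, a direct 2D convolution has fully independent output pixels, so each one can go to a separate thread, SIMD lane, or GPU core. The sketch below is a generic direct convolution, not code from the paper.

```python
import numpy as np

# Illustrative direct 2D convolution (valid padding). Every output element
# depends only on its own input window, so the two loops below are trivially
# parallelizable; a parallel runtime may compute all (i, j) concurrently.
def conv2d(image, kernel):
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)   # toy 4x4 image
k = np.ones((2, 2))                   # 2x2 box filter
res = conv2d(img, k)                  # 3x3 output
```

The same independence argument carries over to the convolutional layers of a CNN, which is why they map so well onto GPUs.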
There are many systems that provide users with an electronic identity (eID) to sign documents or authenticate to online services (e.g. governmental eIDs, OpenID). However, current solutions lack proper techniques for using them as regular ID cards that digitally authenticate their holders to another physical person in the real world. We envision a fully mobile eID which provides such functionality in a privacy-preserving manner, fulfills the requirements of governmental identities with high security demands (such as driving licenses or passports), and can be used in the private domain (e.g. as a loyalty card). In this paper, we present potential use cases for such a flexible and privacy-preserving mobile eID and discuss the concept of privacy-preserving attribute queries. Furthermore, we formalize the necessary functional, mobile, security, and privacy requirements, and give a brief overview of potential techniques to cover all of them.
Title: Real-World Identification: Towards a Privacy-Aware Mobile eID for Physical and Offline Verification
Authors: M. Hölzl, Michael Roland, R. Mayrhofer
DOI: 10.1145/3007120.3007158
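The privacy-preserving attribute query concept can be illustrated with a predicate that discloses only a boolean answer. A real eID would enforce this with cryptographic attribute-based credentials rather than plain code; the attribute names and values here are invented.

```python
from datetime import date

# Sketch of the interface idea: the raw attributes stay on the card, and the
# verifier learns only whether a predicate holds (e.g. "holder is over 18"),
# never the birth date itself.
EID_ATTRIBUTES = {"birth_date": date(1990, 5, 17), "name": "Alice"}

def query_over_age(attributes, years, today):
    """Answer an age predicate without disclosing the birth date."""
    bd = attributes["birth_date"]
    age = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
    return age >= years              # only this boolean leaves the card

answer = query_over_age(EID_ATTRIBUTES, 18, date(2016, 11, 28))
```

This is the interface a bouncer-style check needs: a yes/no answer that reveals nothing else about the holder.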
A. Hassani, P. D. Haghighi, P. Jayaraman, A. Zaslavsky, Sea Ling, A. Medvedev
As the standardization efforts for the IoT progress rapidly, we will quickly reach a point where context derived from IoT data and relations is the underpinning factor enabling interaction between "smart things". Therefore, a generic approach for describing and querying context is crucial for the future of IoT applications. In this paper, we propose the Context Definition and Query Language (CDQL), an approach that enables things to exchange context. CDQL consists of two main parts: the Context Definition Model, which describes the contextual attributes and context-related capabilities of each "thing"; and the Context Query Language (CQL), a flexible query language for expressing contextual information requirements without considering the details of the underlying data structure. We exemplify the use of CDQL via a smart-city case study that highlights how CDQL can deliver context information to IoT applications.
Title: CDQL: A Generic Context Representation and Querying Approach for Internet of Things Applications
DOI: 10.1145/3007120.3007137
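The abstract does not give CQL's concrete syntax, so the following only mimics the idea of querying context without knowing the underlying storage layout; the data, thing names and function are invented for illustration.

```python
# Hypothetical sketch of context querying in the CQL spirit: a query names a
# context attribute and a condition, and is evaluated against each thing's
# context regardless of how that context is stored internally.
things = {
    "bus-42": {"location": "cbd", "occupancy": 0.9},
    "bus-7":  {"location": "suburb", "occupancy": 0.3},
}

def select(context_db, attribute, predicate):
    """Return ids of things whose context attribute satisfies the predicate."""
    return [tid for tid, ctx in context_db.items()
            if attribute in ctx and predicate(ctx[attribute])]

crowded = select(things, "occupancy", lambda v: v > 0.5)
```

A smart-city application could issue such a query against many heterogeneous "things" and receive context answers without ever seeing their internal data structures.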
One of the major challenges in the activity recognition task is the need to adapt a classification model during its operation. This is important because the underlying data distribution may change between the data used for training and the new, evolving stream of data encountered during online recognition. The changes between the two sessions may occur because of differences in sensor placement and orientation, or in user characteristics such as age and gender. However, many existing approaches to model adaptation in activity recognition are blind methods: they continuously adapt the classification model without explicitly detecting changes in the concepts being predicted. We therefore propose a concept change detection method for activity recognition, under the assumption (realistic for activity recognition) that a concept change in the model of an activity is accompanied by changes in the distribution of the input data attributes. Our change detection method computes a change detection statistic on a stream of multi-dimensional unlabelled data classified into different concept windows. The values of the change indicators are then processed to detect peak points that indicate concept change in the stream of activity data. Evaluation of the approach on a real activity recognition dataset shows consistent detections that correlate with the error rate of the model.
Title: UDetect: Unsupervised Concept Change Detection for Mobile Activity Recognition
Authors: S. Bashir, Andrei V. Petrovski, D. Doolan
DOI: 10.1145/3007120.3007144
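The window-based change statistic can be sketched as follows. The particular statistic (distance between window means) and the threshold are placeholder choices for illustration, not the paper's; the stream is synthetic, with a known change half-way through.

```python
import numpy as np

# Sketch of the idea: split a multi-dimensional unlabelled stream into fixed
# windows, compute a per-window distance from a reference window, and flag
# windows whose statistic peaks above a threshold as concept changes.
rng = np.random.default_rng(1)
stream = np.vstack([rng.normal(0, 1, (200, 3)),    # concept A
                    rng.normal(3, 1, (200, 3))])   # concept B (change at t=200)

WIN = 50
ref_mean = stream[:WIN].mean(axis=0)               # reference window statistics
stats = [np.linalg.norm(stream[s:s + WIN].mean(axis=0) - ref_mean)
         for s in range(0, len(stream) - WIN + 1, WIN)]

THRESHOLD = 1.0                                    # placeholder threshold
change_windows = [i for i, v in enumerate(stats) if v > THRESHOLD]
```

Because the data are unlabelled, a detection here can trigger model adaptation exactly when it is needed, instead of adapting blindly on every window.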
Ting Chen, H. Sahli, Yanning Zhang, Tao Yang, Lingyan Ran
Compressive sensing trackers, which use a very sparse measurement matrix to capture the target's appearance model, perform well when the tracked targets are well defined. However, such trackers often suffer from drifting, because the tracking result is a bounding box that also includes background information, especially under occlusion and in low-contrast situations. In this paper, we propose an online compressive tracking algorithm based on superpixel segmentation (SPCT). The proposed algorithm employs a weighted multi-scale random measurement matrix together with an efficient superpixel segmentation that preserves the image structure of the target during tracking. The superpixel segmentation is used to distinguish the target from its surrounding background and to obtain the weighted features within the bounding box. Furthermore, a feedback strategy is proposed to update the classifier model and reduce the drifting risk. Extensive experimental results demonstrate that the proposed algorithm outperforms several state-of-the-art tracking algorithms as well as existing compressive trackers.
Title: Compressive Tracking based on Superpixel Segmentation
DOI: 10.1145/3007120.3011074
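The very sparse measurement matrix at the core of compressive trackers can be illustrated with a generic sparse random projection; this is the standard recipe, not the paper's weighted multi-scale construction, and the dimensions are arbitrary.

```python
import numpy as np

# Sketch: compress a high-dimensional appearance feature vector through a very
# sparse random matrix whose entries are +-sqrt(s) with probability 1/(2s)
# each, and 0 otherwise, so most multiplications can be skipped.
rng = np.random.default_rng(2)

def sparse_measurement_matrix(m, n, s=3):
    """Very sparse random projection matrix (mostly zeros)."""
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(m, n),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])

R = sparse_measurement_matrix(10, 1000)
x = rng.normal(size=1000)        # high-dimensional appearance features
z = R @ x                        # compressed 10-dimensional representation
```

Because most entries of R are zero, the compressed feature z is cheap enough to recompute on every frame, which is what makes compressive tracking real-time.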