"Computer simulation and approximate expression for the mean range of reservoir storage with GAR(1) inflows", N. Hung and Tran Quoc Chien. doi:10.1145/2542050.2542055
Reservoir storage capacity has been investigated for independent and first-order dependent normal inflows. Besides the normal distribution, annual streamflows have also been found to follow the gamma distribution [4], [11], so it is useful to consider this case as well. In the work of Phien [10], the inflows are assumed to follow the gamma distribution; the mean value of the range (or the storage capacity) was derived analytically and compared very well with the empirical formula obtained from Monte Carlo experiments in his earlier study. This paper considers the distribution of the storage capacity of reservoirs whose inflows are assumed to follow the first-order autoregressive model for gamma variables, denoted the GAR(1) model. By computer simulation, annual inflows are generated, and data for the partial sums and the range are then obtained for any given value of n, the lifetime (in years) of the reservoir under consideration. By theoretical analysis, a closed-form formula for the variance of the sum of GAR(1) variables is derived. This formula is then used, along with the empirical formula of Phien [10], to obtain an approximate expression for the mean value of the reservoir storage. The results computed from the approximate expression compare very well with those obtained from generated data, which means the approximate expression can be used to determine the mean range (or mean reservoir capacity) for any values of the GAR(1) model parameters found in practice.
{"title":"Computer simulation and approximate expression for the mean range of reservoir storage with GAR(1) inflows","authors":"N. Hung, Tran Quoc Chien","doi":"10.1145/2542050.2542055","DOIUrl":"https://doi.org/10.1145/2542050.2542055","url":null,"abstract":"Reservoir storage capacity has been investigated for independent and first-order dependent normal inflows. Besides the normal distribution, annual streamflows have also been found to follow the gamma distribution[4],[11] it is then useful to consider this situation. In the work of Phien[10], the inflows are assumed to follow the gamma distribution, and the mean value of the range (or the storage capacity) was derived analytically, and compared very well with the empirical formula obtained from Monte Carlo experiments in his earlier study. This paper considers the distribution of the storage capacity of reservoirs where the inflows are assumed to follow the first-order autoregressive model for gamma variables, denoted GAR(1) model. By means of computer simulation method, the annual inflows are generated, then the data for the partial sums and range are obtained for any given value of n, the life time (in years) of the reservoir under consideration. By theoretical analysis, a closed form formula for the variance of the sum of GAR(1) variables is derived. This formula is then used along with the empirical formula of Phien[10] to obtain an approximate expression for the mean value of the reservoir storage. The results computed from the approximate expression can be compared very well with those obtained from generated data. This means that the approximate expression obtained can be used to determine the mean range (or mean reservoir capacity) for any value of the parameters of the GAR(1) model found in practice.","PeriodicalId":246033,"journal":{"name":"Proceedings of the 4th Symposium on Information and Communication Technology","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134464984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Online model-driven IDE to design GUIs for cross-platform mobile applications", Chi-Kien Diep, Q. Tran, and Minh-Triet Tran. doi:10.1145/2542050.2542083
The wide variation in the features and capabilities of mobile devices leads to difficulties in developing the same application for different platforms. We therefore propose an online model-driven integrated development environment that provides developers with platform-independent GUI design for mobile applications. Our proposed system transforms an abstract, platform-independent GUI into a platform-dependent GUI on a target platform. The generated project is entirely in the form with which an experienced developer on that platform is already familiar. Furthermore, as the generated application is not web-based, it can naturally access the native features of a platform. The proposed flexible architecture makes it possible to handle and update the different abstract UI and non-UI controls needed to design GUIs for mobile applications. Experimental results with volunteers show that our proposed solution can save 25-51% of the time needed to create the GUIs of an application for three different platforms: Android, iOS, and Windows Phone.
{"title":"Online model-driven IDE to design GUIs for cross-platform mobile applications","authors":"Chi-Kien Diep, Q. Tran, Minh-Triet Tran","doi":"10.1145/2542050.2542083","DOIUrl":"https://doi.org/10.1145/2542050.2542083","url":null,"abstract":"The wide variation in features and capabilities of mobile devices lead to difficulties in the development of the same application on different platforms. Therefore we propose an online model-driven integrated development environment to provide developers with a platform-independent GUI design for mobile applications. Our proposed system transforms an abstract platform-independent GUI into a platform-dependent GUI on a target platform. The generated project is entirely in the form with which an experienced developer on the platform is already familiar. Furthermore, as the generated application is not a web-based one, it can access naturally native features of a platform. The proposed flexible architecture enable the capability to handle and update different abstract UI and non-UI controls needed to design GUIs for mobile applications. Experimental results with volunteers show that our proposed solution can save up to 25--51% time to create GUIs of an application to three different platforms of Android, iOS and Windows Phone.","PeriodicalId":246033,"journal":{"name":"Proceedings of the 4th Symposium on Information and Communication Technology","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127112586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Toward a practical visual object recognition system", Mao Nguyen and M. Tran. doi:10.1145/2542050.2542077
Recent research in cognitive science and document recognition has been applied to the problem of object categorization. Bag-of-Features (BoF) and its extension, Spatial Pyramid Matching (SPM), have made a breakthrough on this kind of challenge. Many methods following this guideline enhance recognition accuracy but still have drawbacks when developing real-world applications whose data are many times larger. In this paper we propose two kinds of strategies, comprising five criteria, to evaluate and select the most appropriate training samples for building a high-performance classifier. We also suggest a method called reinforcement codebook learning that makes the codebook training process not only purpose-built to best fit the most suitable criteria but also much more efficient, by significantly reducing its computational complexity. Experiments on a benchmark object dataset demonstrate that our proposed framework achieves remarkable results and is comparable with the state of the art despite using just 20% of 9·10^6 descriptors for training the dictionary. These results hold promise for building an efficient and feasible object categorization system for practical applications, and they also suggest ideas for improving visual feature representation in the future.
{"title":"Toward a practical visual object recognition system","authors":"Mao Nguyen, M. Tran","doi":"10.1145/2542050.2542077","DOIUrl":"https://doi.org/10.1145/2542050.2542077","url":null,"abstract":"Recent researches in cognitive science and document recognition have been applied to deal with the problem of categorizing object. Bag-of-Features (BoF) and its extension Spatial Pyramid Matching (SPM) have made a breakthrough in resolving this kind of challenges. Many methods followed this guideline really enhance the recognition accuracy but still have drawbacks in developing a real-world application whose data size is many times bigger. In this paper we propose two kinds of strategy include five criteria to evaluate and select the most appropriate training samples using for building a high performance classifier. We also suggest a method called reinforcement codebook learning to make the codebook training process not only purpose-built to best fits with the most suitable criteria but also much more efficient by reducing significantly its complexity of computation. Experiments on benchmark object dataset demonstrate that our proposed framework outperforms remarkable results and is comparable with the state-of-the-art in spite of using just 20% of 9 · 106 descriptors for training the dictionary. These results give a promise of building a efficient and feasible object categorization system for practical application as so as suggest some ideas to improve the visual feature representation in future.","PeriodicalId":246033,"journal":{"name":"Proceedings of the 4th Symposium on Information and Communication Technology","volume":"183 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114167252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A method for hand detection using internal features and active boosting-based learning", V. Nguyen, Thuy Thi Nguyen, R. Mullot, Thi-Thanh-Hai Tran, and H. Le. doi:10.1145/2542050.2542078
Hand posture recognition has important applications in sign language, human-machine interfaces, and related areas. In most such systems, the first and most important step is hand detection. This paper presents a hand detection method based on internal features in an active boosting-based learning framework. The use of efficient Haar-like features, local binary patterns, and local orientation histograms as internal features allows fast computation of informative hand features, coping with a great variety of hand appearances without background interference. Interactive boosting-based online learning allows efficient training and improvement of the detector. Experimental results show that the proposed method outperforms conventional methods on video data with complex backgrounds while using a smaller number of training samples. The proposed method provides reliable hand detection for hand posture recognition systems.
{"title":"A method for hand detection using internal features and active boosting-based learning","authors":"V. Nguyen, Thuy Thi Nguyen, R. Mullot, Thi-Thanh-Hai Tran, H. Le","doi":"10.1145/2542050.2542078","DOIUrl":"https://doi.org/10.1145/2542050.2542078","url":null,"abstract":"Hand posture recognition has important applications in sign language, human machine interface, etc. In most such systems, the first and important step is hand detection. This paper presents a hand detection method based on internal features in an active boosting-based learning framework. The use of efficient Haar-like, local binary pattern and local orientation histogram as internal features allows fast computation of informative hand features for dealing with a great variety of hand appearances without background interference. Interactive boosting-based on-line learning allows efficiently training and improvement for the detector. Experimental results show that the proposed method outperforms the conventional methods on video data with complex background while using a smaller number of training samples. The proposed method is reliable for hand detection in the hand posture recognition system.","PeriodicalId":246033,"journal":{"name":"Proceedings of the 4th Symposium on Information and Communication Technology","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131013971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Minimizing data transfers for regular reachability queries on distributed graphs", Quyet Nguyen-Van, Le-Duc Tung, and Zhenjiang Hu. doi:10.1145/2542050.2542092
Nowadays there is an explosion of Internet information, which is typically distributed across different sites; hence, finding information efficiently becomes difficult. Efficient query evaluation on distributed graphs is an important research topic, with real applications such as social network analysis, web mining, and ontology matching. A widely used query on distributed graphs is the regular reachability query (RRQ). An RRQ verifies whether a node can reach another node by a path satisfying a regular expression. Traditionally, RRQs are evaluated by distributed depth-first or breadth-first search, but these methods are limited by total network traffic and response time on large graphs. Recently, Wenfei Fan et al. proposed an approach that improves reachability queries by visiting each site only once, but it suffers from a communication bottleneck when assembling all the distributed partial query results. In this paper, we propose two algorithms to improve Fan's algorithm for RRQs. The first filters and removes redundant nodes/edges on each local site, in parallel. The second limits data transfers by local contraction of the partial results. We extensively evaluated our algorithms on MapReduce using YouTube and DBLP datasets. The experimental results show that our method reduces unnecessary data transfers by up to 60%, which alleviates the communication bottleneck.
{"title":"Minimizing data transfers for regular reachability queries on distributed graphs","authors":"Quyet Nguyen-Van, Le-Duc Tung, Zhenjiang Hu","doi":"10.1145/2542050.2542092","DOIUrl":"https://doi.org/10.1145/2542050.2542092","url":null,"abstract":"Nowadays, there is an explosion of Internet information, which is normally distributed on different sites. Hence, efficient finding information becomes difficult. Efficient query evaluation on distributed graphs is an important research topic since it can be used in real applications such as: social network analysis, web mining, ontology matching, etc. A widely-used query on distributed graphs is the regular reachability query (RRQ). A RRQ verifies whether a node can reach another node by a path satisfying a regular expression. Traditionally RRQs are evaluated by distributed depth-first search or distributed breadth-first search methods. However, these methods are restricted by the total network traffic and the response time on large graphs. Recently, Wenfei Fan et al. proposed an approach for improving reachability queries by visiting each site only once, but it has a communication bottleneck problem when assembling all distributed partial query results. In this paper, we propose two algorithms in order to improve Wenfei Fan's algorithm for RRQs. The first algorithm filters and removes redundant nodes/edges on each local site, in parallel. The second algorithm limits the data transfers by local contraction of the partial result. We extensively evaluated our algorithms on MapReduce using YouTube and DBLP datasets. The experimental results show that our method reduces unnecessary data transfers at most 60%, this solves the communication bottleneck problem.","PeriodicalId":246033,"journal":{"name":"Proceedings of the 4th Symposium on Information and Communication Technology","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124505805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Correlation-based clustering in wireless sensor network for energy saving protocol", Nguyen Thi Thanh Nga, Son-Hong Ngo, and Ngo Quynh Thu. doi:10.1145/2542050.2542082
Our research concentrates on energy efficiency in wireless sensor networks. One approach exploits a characteristic of the environment: the correlation among the sensed data of nodes in a region. Sensor nodes are clustered into highly correlated regions (HCRs) to take advantage of the correlation between nodes in order to save energy. However, determining HCRs is computationally complex, which makes implementation difficult. This paper proposes a correlation-based approach that evaluates the correlation between two data sets using a simple calculation while still guaranteeing accurate correlation evaluation. This correlation measure is used to cluster sensor nodes into HCRs. Because the sensed data of nodes in the same HCR are highly correlated, some of these nodes can remain inactive to save energy. Simulation results show that the network lifetime of the proposed system is 1.75 times longer than that of the conventional protocol.
{"title":"Correlation-based clustering in wireless sensor network for energy saving protocol","authors":"Nguyen Thi Thanh Nga, Son-Hong Ngo, Ngo Quynh Thu","doi":"10.1145/2542050.2542082","DOIUrl":"https://doi.org/10.1145/2542050.2542082","url":null,"abstract":"Our research concentrates on the energy efficiency in Wireless Sensor Network. One approach is based on the characteristics of environment -- the correlation among sensed data of nodes in a region. The sensor nodes are clustered into highly correlated regions (HCRs) to take advantage of correlation between sensor nodes in order to save energy. However, the determination of HCR is very complex in calculation, thus causes difficulty in implementation. This paper proposes a correlation-based approach that evaluates the correlation between two data sets using a simple calculation and that guarantees the accuracy in correlated evaluation between data. This correlation-based method is proposed to cluster sensor nodes into HCRs. Because of highly correlated characteristics among sensed data of nodes in the same HCRs, some high correlated-nodes would be inactive for energy saving. Simulation results show that the network lifetime of proposed system is 1.75 times longer than that of the conventional protocol.","PeriodicalId":246033,"journal":{"name":"Proceedings of the 4th Symposium on Information and Communication Technology","volume":"161 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116550513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Proceedings of the 4th Symposium on Information and Communication Technology", Thang Huynh Quyet, B. Thanh, Tien Do Van, Marc Bui, and Son Ngo Hong (eds.). doi:10.1145/2542050
The Fourth Symposium on Information and Communication Technology (SoICT 2013) was held on December 5-6, 2013, in Da Nang, Vietnam. Like its preceding editions, SoICT 2013 was a scientific forum that aimed to bring together researchers and practitioners for technical discussions and interactions on major computing topics. SoICT 2013 was organized by the ACM Vietnam Chapter and the School of Information and Communication Technology, Hanoi University of Science and Technology.
{"title":"Proceedings of the 4th Symposium on Information and Communication Technology","authors":"Thang Huynh Quyet, B. Thanh, Tien Do Van, Marc Bui, Son Ngo Hong","doi":"10.1145/2542050","DOIUrl":"https://doi.org/10.1145/2542050","url":null,"abstract":"The Fourth Symposium on Information and Communication Technology (SoICT 2013) was held on December 5-6, 2013, in Da Nang, Vietnam. As the preceding editions, SoICT 2013 is a scientific forum which aims at bringing together researchers and practitioners for technical discussions and interactions on major computing topics. SoICT 2013 is organized by ACM Vietnam Chapter and School of Information and Communication Technology - Hanoi University of Science and Technology.","PeriodicalId":246033,"journal":{"name":"Proceedings of the 4th Symposium on Information and Communication Technology","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126654754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Experiments with query translation and re-ranking methods in Vietnamese-English bilingual information retrieval", L. T. Giang, V. T. Hung, and Huynh Cong Phap. doi:10.1145/2542050.2542073
Using bilingual dictionaries is a common approach to query translation in cross-language information retrieval. In this article, we focus on Vietnamese-English bilingual information retrieval and present algorithms for query segmentation, word disambiguation, and re-ranking to improve the dictionary-based query translation approach. An evaluation environment is implemented to verify the proposed algorithms and compare them with a baseline method that uses manual translation.
{"title":"Experiments with query translation and re-ranking methods in Vietnamese-English bilingual information retrieval","authors":"L. T. Giang, V. T. Hung, Huynh Cong Phap","doi":"10.1145/2542050.2542073","DOIUrl":"https://doi.org/10.1145/2542050.2542073","url":null,"abstract":"Using bilingual dictionaries is a common way for query translation in Cross Language Information Retrieval. In this article, we focus on Vietnamese-English Bilingual Information Retrieval and present algorithms for query segmentation, word disambiguation and re-ranking to improve the dictionary-based query translation approach. An evaluation environment is implemented to verify and compare the application of proposed algorithms with the baseline method using manual translation.","PeriodicalId":246033,"journal":{"name":"Proceedings of the 4th Symposium on Information and Communication Technology","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127067243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Automatic feature selection for named entity recognition using genetic algorithm", H. T. Le and L. Tran. doi:10.1145/2542050.2542056
This paper presents a feature selection approach for named entity recognition using a genetic algorithm. Different aspects of the genetic algorithm, including computational time and the criteria for evaluating an individual (i.e., the size of the feature subset and the classifier's accuracy), are analyzed in order to optimize its learning process. Two machine learning algorithms, k-Nearest Neighbors and Conditional Random Fields, are used to calculate the accuracy of the named entity recognition system. To evaluate the effectiveness of our genetic algorithm, the feature subsets it returns are compared with those returned by a hill-climbing algorithm and a backward-elimination one. Experimental results show that the feature subsets obtained by our genetic algorithm are much smaller than the original feature set without loss of predictive accuracy. Furthermore, these feature subsets yield higher classifier accuracies than those of the hill-climbing and backward algorithms.
{"title":"Automatic feature selection for named entity recognition using genetic algorithm","authors":"H. T. Le, L. Tran","doi":"10.1145/2542050.2542056","DOIUrl":"https://doi.org/10.1145/2542050.2542056","url":null,"abstract":"This paper presents a feature selection approach for named entity recognition using genetic algorithm. Different aspects of genetic algorithm including computational time and criteria for evaluating an individual (i.e., size of the feature subset and the classifier's accuracy) are analyzed in order to optimize its learning process. Two machine learning algorithms, k-Nearest Neighbor and Conditional Random Fields, are used to calculate the accuracy of the named entity recognition system. To evaluate the effectiveness of our genetic algorithm, feature subsets returning by our proposed genetic algorithm are compared to feature subsets returning by a hill climbing algorithm and a backward one. Experimental results show that feature subsets obtained by our genetic algorithm is much smaller than the original feature set without losing of predictive accuracy. Furthermore, these feature subsets result in higher classifier's accuracies than that of the hill climbing algorithm and the backward one.","PeriodicalId":246033,"journal":{"name":"Proceedings of the 4th Symposium on Information and Communication Technology","volume":"220 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131527567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Principal direction analysis-based real-time 3D human pose reconstruction from a single depth image", Dong-Luong Dinh, Hee-Sok Han, H. J. Jeon, Sungyoung Lee, and Tae-Seong Kim. doi:10.1145/2542050.2542071
Real-time human pose estimation is a challenging problem in computer vision. In this paper, we present a novel approach to recovering a 3D human pose in real time from a single depth human silhouette, using Principal Direction Analysis (PDA) on each recognized body part. In our work, the human body parts are first recognized from a depth human body silhouette via trained Random Forests (RFs). On each recognized body part, represented as a 3D point cloud, PDA is applied to estimate the part's principal direction. Finally, the 3D human pose is recovered by mapping the principal direction vector of each body part onto a 3D human body model built from a set of superquadrics linked by kinematic chains. In our experiments, we performed quantitative and qualitative evaluations of the proposed 3D human pose reconstruction methodology. The evaluation results show that the proposed approach performs reliably on sequences of unconstrained poses and achieves an average reconstruction error of 7.46 degrees over a few key joint angles. Our 3D pose recovery methodology should be applicable to many areas, such as human-computer interaction and human activity recognition.
{"title":"Principal direction analysis-based real-time 3D human pose reconstruction from a single depth image","authors":"Dong-Luong Dinh, Hee-Sok Han, H. J. Jeon, Sungyoung Lee, Tae-Seong Kim","doi":"10.1145/2542050.2542071","DOIUrl":"https://doi.org/10.1145/2542050.2542071","url":null,"abstract":"Human pose estimation in real-time is a challenging problem in computer vision. In this paper, we present a novel approach to recover a 3D human pose in real-time from a single depth human silhouette using Principal Direction Analysis (PDA) on each recognized body part. In our work, the human body parts are first recognized from a depth human body silhouette via the trained Random Forests (RFs). On each recognized body part which is presented as a set of 3D points cloud, PDA is applied to estimate the principal direction of the body part. Finally, a 3D human pose gets recovered by mapping the principal directional vector to each body part of a 3D human body model which is created with a set of super-quadrics linked by the kinematic chains. In our experiments, we have performed quantitative and qualitative evaluations of the proposed 3D human pose reconstruction methodology. Our evaluation results show that the proposed approach performs reliably on a sequence of unconstrained poses and achieves an average reconstruction error of 7.46 degree in a few key joint angles. Our 3D pose recovery methodology should be applicable to many areas such as human computer interactions and human activity recognition.","PeriodicalId":246033,"journal":{"name":"Proceedings of the 4th Symposium on Information and Communication Technology","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124369620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}