A correlation-aware partial materialization scheme for near real-time automotive queries
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043864
Yu Hua, D. Feng
Real-time aggregate queries can provide summaries of the traffic information of interest on the road. However, due to unreliable connections and limited connection durations in Vehicular Ad hoc Networks (VANETs), it is difficult to carry out online computation over all received traffic messages. To improve query accuracy and provide quick query responses, we propose a novel scheme for real-time aggregate queries, called Road Cube, which relies on precomputation over the traffic messages of interest. We utilize Information Retrieval (IR) techniques to identify information of interest that exhibits potential semantic correlation and is likely to be queried in the future. Road Cube improves upon the conventional data cube by exploiting the semantic correlation among the multi-dimensional attributes of received traffic information to obtain a partial materialization, which typically satisfies the real-time and space requirements of VANETs. Extensive performance evaluation based on a real-world map and traffic models shows that Road Cube achieves significant performance improvements compared with conventional approaches.
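A partial data cube materializes only some group-by aggregates and answers the rest online. As a minimal sketch of that general idea (not the authors' Road Cube: the attribute names, the query-frequency ranking, and the storage budget below are illustrative assumptions):

```python
from collections import defaultdict

# Toy traffic messages: (road, hour, vehicle type) with an observed speed.
MESSAGES = [
    {"road": "A", "hour": 8, "type": "car", "speed": 42.0},
    {"road": "A", "hour": 8, "type": "bus", "speed": 31.5},
    {"road": "B", "hour": 9, "type": "car", "speed": 55.0},
]
DIMS = ("road", "hour", "type")

def materialize(dims_subset):
    """Precompute AVG(speed) grouped by the given attribute subset (one cuboid)."""
    acc = defaultdict(lambda: [0.0, 0])
    for m in MESSAGES:
        key = tuple(m[d] for d in dims_subset)
        acc[key][0] += m["speed"]
        acc[key][1] += 1
    return {k: s / n for k, (s, n) in acc.items()}

# Partial materialization: rank cuboids by (assumed) historical query
# frequency and keep only as many as the space budget allows.
QUERY_FREQ = {("road",): 90, ("road", "hour"): 70, ("hour",): 20,
              ("type",): 5, ("road", "hour", "type"): 3}
BUDGET = 2  # number of cuboids we can afford to store

ranked = sorted(QUERY_FREQ, key=QUERY_FREQ.get, reverse=True)
CUBE = {c: materialize(c) for c in ranked[:BUDGET]}

def query_avg_speed(**attrs):
    """Answer from a materialized cuboid if possible, else compute online."""
    dims = tuple(sorted(attrs, key=DIMS.index))
    if dims in CUBE:
        return CUBE[dims].get(tuple(attrs[d] for d in dims))
    hits = [m["speed"] for m in MESSAGES
            if all(m[d] == v for d, v in attrs.items())]
    return sum(hits) / len(hits) if hits else None

print(query_avg_speed(road="A"))              # served from a materialized cuboid
print(query_avg_speed(road="A", type="car"))  # falls back to online computation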
{"title":"A correlation-aware partial materialization scheme for near real-time automotive queries","authors":"Yu Hua, D. Feng","doi":"10.1109/SMARTCOMP.2014.7043864","DOIUrl":"https://doi.org/10.1109/SMARTCOMP.2014.7043864","url":null,"abstract":"Real-time aggregate queries can help obtain interested summary of traffic information on the road. However, due to unreliable connection and limited duration in Vehicular Ad hoc Networks (VANETs), it is difficult to carry out the online computation over all received traffic messages. In order to improve query accuracy and provide quick query response, we propose a novel scheme for real-time aggregate queries, called Road Cube, which essentially makes use of precomputation on interested traffic messages. We utilize Information Retrieval (IR) technique to identify interested information that potentially shows semantic correlation and can be indexed in future with high probability. The Road Cube improves upon conventional data cube by exploiting semantic correlation of multi-dimensional attributes existing in received traffic information so as to obtain partial materialization. The partial materialization usually satisfies real-time and space requirements in VANETs. Extensive performance evaluation based on real-world map and traffic models shows that the Road Cube obtains significant performance improvements, compared with the conventional approaches.","PeriodicalId":169858,"journal":{"name":"2014 International Conference on Smart Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131331471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Continuous tracking for mobile targets with mobility nodes in WSNs
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043867
Tian Wang, Zhen Peng, Yonghong Chen, Yiqiao Cai, H. Tian
Tracking mobile targets is one of the most important applications of wireless sensor networks (WSNs). Traditional tracking solutions are based on fixed sensor nodes and suffer from two critical problems. First, energy is a main constraint in WSNs, yet the mobility of targets forces many sensor nodes to switch frequently between active and sleep states, which causes excessive energy consumption. Second, when there are holes in the deployment area, targets may escape detection while moving through them. To solve these problems, this paper exploits a small number of mobile sensor nodes, whose energy capacity is less constrained, to track mobile targets continuously. Based on a realistic detection model, a solution for scheduling mobile nodes to cooperate with ordinary fixed nodes is proposed: when targets move, mobile nodes move along with them. Extensive simulations show that mobile nodes help track the target when holes appear in the coverage area and extend the effective monitoring time. Moreover, the proposed solution effectively reduces the energy consumption of sensor nodes and prolongs the lifetime of the network.
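The scheduling idea can be caricatured in a few lines: whenever the target is outside every fixed node's sensing range (i.e., inside a coverage hole), move the mobile node toward it so it can shadow the target. This sketch is not the paper's algorithm or detection model; the positions, ranges, and speeds are made-up values:

```python
import math

FIXED_NODES = [(10.0, 10.0), (40.0, 10.0), (25.0, 40.0)]  # fixed sensors (x, y)
SENSE_RANGE = 12.0   # sensing radius of every node (assumed)
MOBILE_SPEED = 2.0   # distance a mobile node covers per time step

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def in_hole(target):
    """True if no fixed node can sense the target."""
    return all(dist(f, target) > SENSE_RANGE for f in FIXED_NODES)

def step_mobile(mobile, target):
    """Move the mobile node toward the target by at most MOBILE_SPEED."""
    d = dist(mobile, target)
    if d <= MOBILE_SPEED:
        return target
    t = MOBILE_SPEED / d
    return (mobile[0] + t * (target[0] - mobile[0]),
            mobile[1] + t * (target[1] - mobile[1]))

# The mobile node shadows the target only while it is inside a hole;
# otherwise it stays put and the fixed infrastructure does the work.
mobile = (0.0, 0.0)
trajectory = [(25.0, 25.0), (26.0, 24.0), (27.0, 23.0)]  # target path
for target in trajectory:
    if in_hole(target):
        mobile = step_mobile(mobile, target)
    covered = (not in_hole(target)) or dist(mobile, target) <= SENSE_RANGE
    print(f"target={target} covered={covered}")
```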
{"title":"Continuous tracking for mobile targets with mobility nodes in WSNs","authors":"Tian Wang, Zhen Peng, Yonghong Chen, Yiqiao Cai, H. Tian","doi":"10.1109/SMARTCOMP.2014.7043867","DOIUrl":"https://doi.org/10.1109/SMARTCOMP.2014.7043867","url":null,"abstract":"Tracking mobile targets is one of the most important applications in wireless sensor networks (WSNs). Traditional tracking solutions are based on fixed sensor nodes and have two critical problems. First, in WSNs, the energy constraint is a main concern, but due to the mobility of targets, lots of sensor nodes in WSNs have to switch between active and sleep states frequently, which causes excessive energy consumption. Second, when there are holes in the deployment area, targets may fail to be detected while moving in the holes. To solve these problems, this paper exploits a few of mobile sensor nodes to continuously track mobile targets because the energy capacity of mobile nodes is less constrained. Based on a realistic detection model, a solution for scheduling mobile nodes to cooperate with ordinary fixed nodes is proposed. When targets move, mobile nodes move along with them for tracking. The results of extensive simulations show that mobile nodes help to track the target when holes appears in the coverage area and extend the effective monitoring time. Moreover, the proposed solution can effectively reduce the energy consumption of sensor nodes and prolong the lifetime of the networks.","PeriodicalId":169858,"journal":{"name":"2014 International Conference on Smart Computing","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131718595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An optimal slicing strategy for SDN based smart home network
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043848
Shiwei Wang, Xiaoling Wu, Hainan Chen, Yanwen Wang, Daiping Li
Software Defined Network (SDN) has long been a research focus since it was born in the labs of Stanford University. Research on traditional home networks faces a series of challenges due to increasingly complicated user demands, and applying SDN to the home network is an effective way to cope with them. Research on SDN-based home networks is still at a preliminary stage, so for a better user experience it is essential to manage and utilize home network resources effectively. General slicing strategies show little performance advantage within home networks as user demands and applications grow. In this paper, we introduce an SDN-based home network prototype and analyze its composition and application requirements. By implementing several slicing strategies and comparing their properties, we arrive at a slicing strategy optimized for the given home network circumstances and our preferences.
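For intuition, network slicing divides one physical link's capacity among isolated virtual networks, and different strategies trade utilization against isolation. The sketch below compares two generic strategies on toy numbers (a static even split versus a demand-proportional split); it is illustrative only and does not reproduce the strategies evaluated in the paper:

```python
LINK_CAPACITY = 100.0  # Mbit/s of a hypothetical home uplink

# Per-slice traffic demand in Mbit/s, e.g. video, IoT, guest Wi-Fi.
DEMAND = {"video": 60.0, "iot": 5.0, "guest": 15.0}

def static_even(demand, capacity):
    """Every slice gets the same share, regardless of demand."""
    share = capacity / len(demand)
    return {s: share for s in demand}

def demand_proportional(demand, capacity):
    """Shares proportional to demand (capped at the demand itself)."""
    total = sum(demand.values())
    return {s: min(d, capacity * d / total) for s, d in demand.items()}

def unmet(demand, alloc):
    """Total demand left unserved under an allocation."""
    return sum(max(0.0, d - alloc[s]) for s, d in demand.items())

for strategy in (static_even, demand_proportional):
    alloc = strategy(DEMAND, LINK_CAPACITY)
    print(strategy.__name__, alloc, "unmet:", unmet(DEMAND, alloc))
```

On these numbers the even split starves the video slice while the demand-proportional split serves everything, which is the kind of property comparison the abstract alludes to.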
{"title":"An optimal slicing strategy for SDN based smart home network","authors":"Shiwei Wang, Xiaoling Wu, Hainan Chen, Yanwen Wang, Daiping Li","doi":"10.1109/SMARTCOMP.2014.7043848","DOIUrl":"https://doi.org/10.1109/SMARTCOMP.2014.7043848","url":null,"abstract":"Software Defined Network (SDN) has long been a research focus since born from the lab of Stanford University. Researches on traditional home networks are faced with a series of challenges due to the ever more complicated user demands. The application of SDN to the home network is an effective approach in coping with it. Now the research on the SDN based home network is in its preliminary stage. Therefore, for better user experience, it is essential to effectively manage and utilize the resources of the home network. The general slicing strategies don't show much advantage in performance within the home networks due to the increased user demands and applications. In this paper, we introduce an advanced SDN based home network prototype and analyze its compositions and application requirements. By implementing and comparing several slicing strategies in properties, we achieve an optimized slicing strategy according to the specified home network circumstance and our preference.","PeriodicalId":169858,"journal":{"name":"2014 International Conference on Smart Computing","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132761041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A people counting method based on head detection and tracking
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043851
Bin Li, Jian Zhang, Zheng Zhang, Yong Xu
This paper proposes a novel people counting method based on head detection and tracking, which counts the number of people passing under an overhead camera. The proposed method has four main parts: foreground extraction, head detection, head tracking, and crossing-line judgment. It first uses an effective foreground extraction technique to obtain the foreground regions of moving people, with morphological operations employed to refine these regions. It then applies an LBP-feature-based AdaBoost classifier to detect heads within the refined foreground regions. After detection, each candidate head is tracked by a local tracking method based on the Mean-shift algorithm. Finally, based on the tracking results, crossing-line judgment determines whether a candidate head should be counted. Experiments show that our method achieves a promising people counting accuracy of about 96% and acceptable computation speed under different circumstances.
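Crossing-line judgment itself reduces to per-track bookkeeping: compare each head's position against the counting line on consecutive frames and increment a direction-specific counter when the line is crossed. A minimal sketch (the line position and the track format are assumptions, not the paper's exact rule):

```python
COUNT_LINE_Y = 240  # y-coordinate of the virtual counting line (assumed)

def count_crossings(tracks):
    """tracks: {track_id: [(x, y) per frame]} of tracked head centroids.
    Returns (people moving down, people moving up)."""
    down = up = 0
    for points in tracks.values():
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if y0 < COUNT_LINE_Y <= y1:    # crossed the line downward
                down += 1
            elif y1 < COUNT_LINE_Y <= y0:  # crossed the line upward
                up += 1
    return down, up

tracks = {
    1: [(100, 200), (102, 230), (105, 260)],  # crosses downward once
    2: [(300, 300), (298, 250), (297, 220)],  # crosses upward once
}
print(count_crossings(tracks))  # (1, 1)
```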
{"title":"A people counting method based on head detection and tracking","authors":"Bin Li, Jian Zhang, Zheng Zhang, Yong Xu","doi":"10.1109/SMARTCOMP.2014.7043851","DOIUrl":"https://doi.org/10.1109/SMARTCOMP.2014.7043851","url":null,"abstract":"This paper proposes a novel people counting method based on head detection and tracking to evaluate the number of people who move under an over-head camera. There are four main parts in the proposed method: foreground extraction, head detection, head tracking, and crossing-line judgment. The proposed method first utilizes an effective foreground extraction method to obtain foreground regions of moving people, and some morphological operations are employed to optimize the foreground regions. Then it exploits a LBP feature based Adaboost classifier for head detection in the optimized foreground regions. After head detection is performed, the candidate head object is tracked by a local head tracking method based on Meanshift algorithm. Based on head tracking, the method finally uses crossing-line judgment to determine whether the candidate head object will be counted or not. Experiments show that our method can obtain promising people counting accuracy about 96% and acceptable computation speed under different circumstances.","PeriodicalId":169858,"journal":{"name":"2014 International Conference on Smart Computing","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130123838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of PM2.5 and PM10 using normalized first-order absolute sum of high-frequency spectrum
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043840
Wenming Yang, Xiang Chen, Q. Liao
A new method for air quality evaluation using only visible image analysis is introduced in this paper. Based on the fact that suspended particles in the air are visible, we attempt to derive from visible images a measure that is closely related to the density of suspended particles (namely the PM2.5 and PM10 values), so that these values can be estimated via digital image processing. Combined with water droplets, suspended particles in the air form fog or haze. Based on the monochrome atmospheric scattering model, which is widely used to describe the formation of a haze image, we propose a measure, the normalized first-order absolute sum of the high-frequency spectrum (NFAS), and investigate its relationship with the PM2.5 and PM10 values. The experimental results show that the proposed measure is closely related to PM2.5 and is also correlated with PM10.
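The name of the measure suggests one plausible computation: take the image's 2-D spectrum, keep only a high-frequency band, and sum the absolute coefficients with a normalization. The cutoff radius and the normalizer below are guesses, not the paper's definition:

```python
import numpy as np

def nfas(gray, cutoff_ratio=0.1):
    """One possible reading of NFAS: the absolute sum of spectral
    coefficients outside a low-frequency disc, normalized by the
    absolute sum of the whole spectrum (both choices are assumptions)."""
    spec = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    mag = np.abs(spec)
    h, w = mag.shape
    yy, xx = np.ogrid[:h, :w]
    r2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    cutoff2 = (cutoff_ratio * min(h, w)) ** 2
    return mag[r2 > cutoff2].sum() / mag.sum()

# Haze suppresses fine detail, so this score should drop on hazy scenes.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
hazy = 0.2 * sharp + 0.8 * sharp.mean()  # crude low-contrast "haze"
print(nfas(sharp), nfas(hazy))
```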
{"title":"Evaluation of PM2.5 and PM10 using normalized first-order absolute sum of high-frequency spectrum","authors":"Wenming Yang, Xiang Chen, Q. Liao","doi":"10.1109/SMARTCOMP.2014.7043840","DOIUrl":"https://doi.org/10.1109/SMARTCOMP.2014.7043840","url":null,"abstract":"A new method for air quality evaluation using only visible image analysis is introduced in this paper. Based on the fact that suspended particles in air are visible, we attempted to use visible images to develop an appropriate measure which can be closely related to the density of suspended particles (namely the values of PM2.5 and PM10). Furthermore, using this measure, we can evaluate the values of PM2.5 and PM10 via digital image processing. Combined with water droplets, suspended particles in air can form fog or haze. Based on the monochrome atmospheric scattering model, which has been widely used to describe the formation of a haze image, we propose a measure, normalized first-order absolute sum of high-frequency spectrum (NFAS) and attempt to investigate its relationship with the values of PM2.5 and PM10. The experimental results showed the proposed measure is closely related to PM2.5 and has a relation with PM10.","PeriodicalId":169858,"journal":{"name":"2014 International Conference on Smart Computing","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133508892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A MAP estimation based segmentation model for speckled images
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043836
Yu Han, G. Baciu, Chen Xu
In this paper, we propose a new fuzzy-based variational model that efficiently computes partitions of speckled images, such as images obtained from Synthetic Aperture Radar (SAR). The model is derived using the maximum a posteriori (MAP) estimation method. The novelties of the model are: (1) the Gamma distribution, rather than the classical Gaussian distribution, is used to model the gray intensities in each homogeneous region of the images (the Gamma distribution is better suited to speckled images); (2) an adaptive weighted regularization term with respect to a fuzzy membership function is designed to prevent the segmentation results from degenerating (being over-smoothed). Compared with the classical total variation (TV) regularizer, the proposed regularization term has a sparser property. In addition, a new alternating direction iteration algorithm is proposed to solve the model; the algorithm is efficient because it integrates the split Bregman method and Chambolle's projection method. Numerical examples verify the efficiency of our model.
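The data-fidelity part of such a model is the negative Gamma log-likelihood of each pixel under each region's parameters. Ignoring the fuzzy memberships and the regularizer, a bare-bones maximum-likelihood labeling looks like this (the region means and the number of looks are illustrative, not the paper's formulation):

```python
import numpy as np
from scipy.special import gammaln

def gamma_nll(u, mean, looks):
    """Negative log-likelihood of intensity u under a Gamma distribution
    with shape `looks` (L) and mean `mean` (c), i.e. scale = c / L."""
    scale = mean / looks
    return -((looks - 1) * np.log(u) - u / scale
             - looks * np.log(scale) - gammaln(looks))

def ml_labels(image, region_means, looks=4):
    """Assign every pixel to the region whose Gamma model explains it best
    (data term only; the paper adds fuzzy memberships and regularization)."""
    costs = np.stack([gamma_nll(image, c, looks) for c in region_means])
    return costs.argmin(axis=0)

# Speckled two-region toy image: Gamma noise around means 1.0 and 4.0.
rng = np.random.default_rng(1)
looks = 4
img = np.where(np.arange(64 * 64).reshape(64, 64) % 64 < 32,
               rng.gamma(looks, 1.0 / looks, (64, 64)),
               rng.gamma(looks, 4.0 / looks, (64, 64)))
print(ml_labels(img, region_means=[1.0, 4.0], looks=looks).mean())  # ~0.5
```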
{"title":"A MAP estimation based segmentation model for speckled images","authors":"Yu Han, G. Baciu, Chen Xu","doi":"10.1109/SMARTCOMP.2014.7043836","DOIUrl":"https://doi.org/10.1109/SMARTCOMP.2014.7043836","url":null,"abstract":"In this paper, we propose a new fuzzy-based variational model that efficiently computes partitioning of speckled images, such as images obtained from Synthetic Aperture Radar (SAR). The model is derived by using the so-called maximizing a posteriori (MAP) estimation method. The novelties of the model are: (1) the Gamma distribution rather than the classical Gaussian distribution is used to model the gray intensities in each homogeneous region of the images (Gamma distribution function is better suited for speckled images); (2) an adaptive weighted regularization term with respect to a fuzzy membership function is designed to protect the segmentation results from degeneration (being over-smoothed). Compared with the classical total variation (TV) regularizer, the proposed regularization term has a sparser property. In addition, a new alternative direction iteration algorithm is proposed to solve the model. The algorithm is efficient since it integrates the split Bregman method and the Chambolle's projection method. Numerical examples are given to verify the efficiency of our model.","PeriodicalId":169858,"journal":{"name":"2014 International Conference on Smart Computing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116390256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic page scrolling for mobile Web search
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043856
M. Alli, Ling Feng
Mobile phones are ubiquitous in daily life; we use them as a camera, radio, music player, and even an Internet browser. Because most Web pages were originally designed for desktop computers with large screens, viewing them on smaller displays requires a lot of horizontal and vertical page scrolling. To reduce the mobile Web search fatigue caused by repeated scrolling, we investigate the automatic Web page scrolling problem based on two observations. First, the different parts of a Web page are not equally important to an end user, who is often interested in one particular part. Second, since text entry on mobile phones is less convenient than on desktop computers, users usually prefer to search the Web just once and get the needed answer. In contrast to existing efforts that modify the page layout or split the content for easier navigation on mobile displays, we present a simple yet effective approach of automatic page scrolling for mobile Web search that keeps the original Web page intact and hence prevents any loss of information. We work with the Document Object Model (DOM) of the page clicked by the user and compute the relevance of each paragraph based on the tf*idf (term frequency * inverse document frequency) values of the user's search keywords occurring in that paragraph; the browser focus is then automatically scrolled to the most relevant paragraph. Our user study shows that the proposed approach achieves 96.47% scrolling accuracy with one search keyword and 94.78% with multiple search keywords, while the time spent computing the most relevant part varies little with the number of keywords. Users can save up to 1.5 s in finding the needed information compared to the best case in our user study.
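Scoring paragraphs by tf*idf is straightforward to state precisely. A minimal sketch of that ranking step (the tokenization and the idf smoothing are conventional choices, not necessarily the paper's):

```python
import math
import re

def tokens(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def most_relevant_paragraph(paragraphs, keywords):
    """Return the index of the paragraph with the highest summed
    tf*idf score over the search keywords; the browser would then
    scroll that paragraph into view."""
    docs = [tokens(p) for p in paragraphs]
    n = len(docs)
    best, best_score = 0, float("-inf")
    for i, doc in enumerate(docs):
        score = 0.0
        for kw in tokens(" ".join(keywords)):
            tf = doc.count(kw) / len(doc) if doc else 0.0
            df = sum(1 for d in docs if kw in d)
            idf = math.log(n / (1 + df))  # smoothed inverse document frequency
            score += tf * idf
        if score > best_score:
            best, best_score = i, score
    return best

paras = ["Weather for today and tomorrow.",
         "Train schedules between Beijing and Shanghai.",
         "Ticket prices for the Beijing subway."]
print(most_relevant_paragraph(paras, ["train", "schedule"]))  # 1
```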
{"title":"Automatic page scrolling for mobile Web search","authors":"M. Alli, Ling Feng","doi":"10.1109/SMARTCOMP.2014.7043856","DOIUrl":"https://doi.org/10.1109/SMARTCOMP.2014.7043856","url":null,"abstract":"Nowadays the usage of mobile phones is widely spread in our daily life. We use mobile phones as a camera, radio, music player, and even an internet browser. As most Web pages were originally designed for desktop computers with large screens, viewing them on smaller displays involves a number of horizontal and vertical page scrolling. To save mobile Web search fatigue caused by repeated scrolling, we investigate the automatic Web page scrolling problem based on two observations. First, every web page has many different parts that do not have the equal importance to an end user, and the user is often interested in a certain part of the Web page. Second, the ease of use of text-entry in mobile phones compare to the desktop computers', users usually prefer to search the Web just once and get the needed answer. Compared to the existing efforts on page layout modification and content splitting for easy page navigation on mobile displays, we present a simple yet effective approach of automatic page scrolling for mobile Web search, while keeping the original Web page content keeps its integrity and hence, preventing any loss of information. We work with the Document Object Model (DOM) of the clicked page by user, compute the relevance of each paragraph of the Web page based on the tf*idf (term frequency*inverse document frequency) values of user's search keywords occurring in that paragraph. The focus of the browser will be automatically scrolled to the most relevant one. Our user study shows that the proposed approach can achieve 96.47% scrolling accuracy under one search keyword, and 94.78% under multiple search keywords, while the time spending in computing the most important part does not vary much from the number of search keywords. The users can save up to 1.5 sec in searching and finding the needed information compare to the best case of our user study.","PeriodicalId":169858,"journal":{"name":"2014 International Conference on Smart Computing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131628022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Garbage collection and wear leveling for flash memory: Past and future
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043841
Ming-Chang Yang, Yu-Ming Chang, Che-Wei Tsao, Po-Chun Huang, Yuan-Hao Chang, Tei-Wei Kuo
Storage systems have recently seen a great leap in performance, reliability, endurance, and cost thanks to advances in non-volatile memory technologies such as NAND flash memory. However, although it delivers better performance, shock resistance, and energy efficiency than mechanical hard disks, NAND flash memory comes with unique characteristics and operational constraints and cannot be used directly as an ideal block device. In particular, to address the notorious write-once property, garbage collection is necessary to reclaim the space occupied by outdated data on flash memory; garbage collection is very time-consuming, however, and often becomes the performance bottleneck of flash memory. Moreover, because flash memory cells endure very limited writes (compared to mechanical hard disks) before they wear out, a wear-leveling design is also indispensable to even out the use of the flash memory space and to prolong the flash memory lifetime. In response, this paper surveys state-of-the-art garbage collection and wear-leveling designs, so as to assist the design of flash memory management in various application scenarios. Future development trends of flash memory, such as the widespread adoption of higher-level-cell flash memory (storing more bits per cell) and the emergence of three-dimensional (3D) flash memory architectures, are also discussed.
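The classic greedy garbage collection policy picks the block with the fewest valid pages as the victim, since that minimizes the copy cost per reclaimed page; a simple wear-aware variant breaks ties by erase count. A minimal sketch of that textbook policy (not any specific design surveyed in the paper):

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    id: int
    erase_count: int = 0
    valid: set = field(default_factory=set)    # page numbers still live
    invalid: set = field(default_factory=set)  # pages overwritten elsewhere

def pick_victim(blocks):
    """Greedy victim selection: fewest valid pages (cheapest to reclaim),
    ties broken toward the least-erased block as a crude wear-leveling nudge."""
    candidates = [b for b in blocks if b.invalid]
    return min(candidates, key=lambda b: (len(b.valid), b.erase_count))

def collect(victim, free_block):
    """Copy live pages out, then erase: the write-once property means
    reclaiming space always costs these extra copies."""
    free_block.valid |= victim.valid  # relocate valid pages
    copies = len(victim.valid)
    victim.valid.clear()
    victim.invalid.clear()
    victim.erase_count += 1
    return copies

blocks = [Block(0, valid={1, 2, 3}, invalid={4}),
          Block(1, valid={5}, invalid={6, 7, 8}),
          Block(2)]
victim = pick_victim(blocks)                  # Block 1: one live page to copy
print(victim.id, collect(victim, blocks[2]))  # 1 1
```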
{"title":"Garbage collection and wear leveling for flash memory: Past and future","authors":"Ming-Chang Yang, Yu-Ming Chang, Che-Wei Tsao, Po-Chun Huang, Yuan-Hao Chang, Tei-Wei Kuo","doi":"10.1109/SMARTCOMP.2014.7043841","DOIUrl":"https://doi.org/10.1109/SMARTCOMP.2014.7043841","url":null,"abstract":"Recently, storage systems have observed a great leap in performance, reliability, endurance, and cost, due to the advance in non-volatile memory technologies, such as NAND flash memory. However, although delivering better performance, shock resistance, and energy efficiency than mechanical hard disks, NAND flash memory comes with unique characteristics and operational constraints, and cannot be directly used as an ideal block device. In particular, to address the notorious write-once property, garbage collection is necessary to clean the outdated data on flash memory. However, garbage collection is very time-consuming and often becomes the performance bottleneck of flash memory. Moreover, because flash memory cells endure very limited writes (as compared to mechanical hard disks) before they are worn out, the wear-leveling design is also indispensable to equalize the use of flash memory space and to prolong the flash memory lifetime. In response, this paper surveys state-of-the-art garbage collection and wear-leveling designs, so as to assist the design of flash memory management in various application scenarios. The future development trends of flash memory, such as the widespread adoption of higher-level flash memory and the emerging of three-dimensional (3D) flash memory architectures, are also discussed.","PeriodicalId":169858,"journal":{"name":"2014 International Conference on Smart Computing","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130401668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face hallucination via position-based dictionaries coding in kernel feature space
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043850
Wenming Yang, T. Yuan, Fei Zhou, Q. Liao
In this paper, we present a new method to reconstruct a high-resolution (HR) face image from a low-resolution (LR) observation. Inspired by the position-patch based face hallucination approach, we design position-based dictionaries to code image patches and recover each HR patch using the coding coefficients as reconstruction weights. In order to capture the nonlinear similarity of face features, we implicitly map the data into a high-dimensional feature space; by applying kernel principal component analysis (KPCA) to the mapped data, we obtain reconstruction coefficients in a reduced subspace. Experimental results show that the proposed method effectively reconstructs the details of face images and outperforms state-of-the-art algorithms in both quantitative and visual comparisons.
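In the linear position-patch baseline this method builds on, each LR patch is coded over the training patches at the same spatial position, and the HR patch is synthesized with the same weights. A least-squares sketch of that baseline (the KPCA feature-space step is omitted; dimensions and the ridge term are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 50               # number of training faces
d_lr, d_hr = 9, 36   # 3x3 LR patch and 6x6 HR patch, flattened

# Dictionaries for ONE patch position: column j holds the LR/HR patch
# cropped from training face j at that position (random stand-ins here).
D_lr = rng.standard_normal((d_lr, m))
D_hr = rng.standard_normal((d_hr, m))

def hallucinate_patch(x_lr, D_lr, D_hr, lam=1e-3):
    """Code the input LR patch over the LR dictionary (ridge-regularized
    least squares), then reuse the weights on the HR dictionary."""
    A = D_lr.T @ D_lr + lam * np.eye(D_lr.shape[1])
    w = np.linalg.solve(A, D_lr.T @ x_lr)  # reconstruction weights
    return D_hr @ w                        # HR patch with the same weights

x_lr = rng.standard_normal(d_lr)
print(hallucinate_patch(x_lr, D_lr, D_hr).shape)  # (36,)
```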
{"title":"Face hallucination via position-based dictionaries coding in kernel feature space","authors":"Wenming Yang, T. Yuan, Fei Zhou, Q. Liao","doi":"10.1109/SMARTCOMP.2014.7043850","DOIUrl":"https://doi.org/10.1109/SMARTCOMP.2014.7043850","url":null,"abstract":"In this paper, we present a new method to reconstruct a high-resolution (HR) face image from a low-resolution (LR) observation. Inspired by position-patch based face hallucination approach, we design position-based dictionaries to code image patches, and recovery HR patch using the coding coefficients as reconstruction weights. In order to capture nonlinear similarity of face features, we implicitly map the data into a high dimensional feature space. By applying kernel principal analysis (KPCA) on the mapped data in the high dimensional feature space, we can obtain reconstruction coefficients in a reduced subspace. Experimental results show that the proposed method can effectively reconstruct details of face images and outperform state-of-the-art algorithms in both quantitative and visual comparisons.","PeriodicalId":169858,"journal":{"name":"2014 International Conference on Smart Computing","volume":"53 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130551608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial expression recognition via deep learning
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043872
Yadan Lv, Zhiyong Feng, Chao Xu
This paper studies facial expression recognition using facial components obtained by face parsing (FP). Motivated by the observations that different parts of the face carry different amounts of expression information and that the appropriate weighting differs across faces, we propose to recognize facial expressions from the components that are most active in revealing expressions. The face parsing detectors are trained via a deep belief network and tuned by logistic regression; they first detect the face, and then detect the nose, eyes, and mouth hierarchically. A deep architecture pretrained with stacked autoencoders is then applied to the combined features of the detected components for expression recognition. The parsed components remove redundant information from expression recognition, and the images need neither alignment nor any other manual preprocessing. Experimental results on the Japanese Female Facial Expression database and the extended Cohn-Kanade dataset outperform other methods and demonstrate the effectiveness and robustness of the algorithm.
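Stacked-autoencoder pretraining trains one reconstruction layer at a time, then feeds each layer's hidden code to the next. A compact numpy sketch of that generic procedure (layer sizes, learning rate, and epochs are arbitrary; this is not the paper's architecture):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, hidden, epochs=200, lr=0.5, seed=0):
    """Train a one-hidden-layer autoencoder with squared-error loss;
    return the encoder weights (W1, b1)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.standard_normal((d, hidden)) * 0.1
    W2 = rng.standard_normal((hidden, d)) * 0.1
    b1, b2 = np.zeros(hidden), np.zeros(d)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)        # encode
        r = sigmoid(h @ W2 + b2)        # reconstruct
        d2 = (r - X) * r * (1 - r)      # output-layer delta
        d1 = (d2 @ W2.T) * h * (1 - h)  # hidden-layer delta
        W2 -= lr * h.T @ d2 / n
        b2 -= lr * d2.mean(axis=0)
        W1 -= lr * X.T @ d1 / n
        b1 -= lr * d1.mean(axis=0)
    return W1, b1

def pretrain_stack(X, layer_sizes):
    """Greedy layer-wise pretraining: each autoencoder learns to
    reconstruct the previous layer's code."""
    stack, inp = [], X
    for hidden in layer_sizes:
        W, b = train_autoencoder(inp, hidden)
        stack.append((W, b))
        inp = sigmoid(inp @ W + b)  # code fed to the next layer
    return stack

X = np.random.default_rng(1).random((100, 64))  # stand-in component features
stack = pretrain_stack(X, [32, 16])
print([W.shape for W, _ in stack])  # [(64, 32), (32, 16)]
```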
{"title":"Facial expression recognition via deep learning","authors":"Yadan Lv, Zhiyong Feng, Chao Xu","doi":"10.1109/SMARTCOMP.2014.7043872","DOIUrl":"https://doi.org/10.1109/SMARTCOMP.2014.7043872","url":null,"abstract":"This paper mainly studies facial expression recognition with the components by face parsing (FP). Considering the disadvantage that different parts of face contain different amount of information for facial expression and the weighted function are not the same for different faces, an idea is proposed to recognize facial expression using components which are active in expression disclosure. The face parsing detectors are trained via deep belief network and tuned by logistic regression. The detectors first detect face, and then detect nose, eyes and mouth hierarchically. A deep architecture pretrained with stacked autoencoder is applied to facial expression recognition with the concentrated features of detected components. The parsing components remove the redundant information in expression recognition, and images don't need to be aligned or any other artificial treatment. Experimental results on the Japanese Female Facial Expression database and extended Cohn-Kanade dataset outperform other methods and show the effectiveness and robustness of this algorithm.","PeriodicalId":169858,"journal":{"name":"2014 International Conference on Smart Computing","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126234003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}