Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043835
Weiping Zhu, Xiaohui Cui, Cheng Hu, Chao Ma
With the advance of RFID technology and pervasive computing, a growing number of RFID devices are deployed in the surrounding environment, forming large-scale RFID systems. Many applications run on top of such a system and perform diverse, possibly conflicting data collection tasks. Existing work on RFID data collection either focuses on deducing events of interest from primitive data or on scheduling the activation of readers to mitigate various kinds of interference. The former assumes that the primitive data have already been collected, and the latter assumes that all readers belong to a single application whose objective is to read every tag once. There is no effective way to specify the constraints of the data collection process for multiple applications and to coordinate the readers to meet such requirements. In this paper, we propose a specification language and a reader coordination algorithm to solve this problem. Our language can specify complex constraints in data collection tasks based on attribute selection, set relations, and temporal relations. A permission-based data collection approach is then developed for the readers to meet these constraints in a distributed way. Extensive simulation results show that the proposed approach outperforms existing approaches in terms of execution time.
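The permission-based coordination idea can be illustrated with a toy round-based scheduler: a reader activates only when no interfering neighbour holds permission in the same round. This is a hypothetical sketch of the coordination concept only; the reader names, the conflict map, and the greedy rounds are invented here, and the paper's distributed protocol and constraint language are far richer.

```python
# Toy permission-based reader scheduling: grant permissions in rounds
# so that no two readers with overlapping interrogation zones are
# active at the same time. Greedy and centralized, for illustration.

def schedule(readers, conflicts):
    """Return a list of rounds; each round is a set of readers
    that may activate simultaneously."""
    rounds, pending = [], set(readers)
    while pending:
        granted = set()
        for r in sorted(pending):
            # Grant permission only if no conflicting reader already has it.
            if all(n not in granted for n in conflicts.get(r, ())):
                granted.add(r)
        rounds.append(sorted(granted))
        pending -= granted
    return rounds

# R1-R2 and R2-R3 interfere (overlapping interrogation zones).
print(schedule(["R1", "R2", "R3"],
               {"R1": ["R2"], "R2": ["R1", "R3"], "R3": ["R2"]}))
# [['R1', 'R3'], ['R2']]
```

Non-conflicting readers (R1, R3) share a round, while R2 waits for the next one.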
Title: Complex data collection in large-scale RFID systems (2014 International Conference on Smart Computing)
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043845
Ha Huy Cuong Nguyen, Van Son Le, Thanh Thuy Nguyen
An allocation of resources to a virtual machine specifies the maximum amount of each individual element of each resource type that will be utilized, as well as the aggregate amount of each resource of each type. An allocation is thus represented by two vectors: a maximum elementary allocation vector and an aggregate allocation vector. There are more general types of resource allocation problems than those we consider here. In this paper, we present an approach for improving a parallel deadlock detection algorithm in order to schedule resource-allocation policies on a heterogeneous distributed platform. The parallel deadlock detection algorithm has a run-time complexity of O(min(m, n)), where m is the number of resources and n is the number of processes. We propose an algorithm for allocating multiple resources to competing services running in virtual machines on a heterogeneous distributed platform. Our experiments also compare the performance of the proposed approach with related work.
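The core of any deadlock detector is finding a cycle in the wait-for graph between processes. The sketch below shows the classic sequential depth-first version, assuming an edge u → v means "process u waits for a resource held by v"; it is not the paper's O(min(m, n)) parallel algorithm, just the underlying check it accelerates.

```python
# Deadlock detection via cycle search in a wait-for graph.
# Edge u -> v means "process u waits for process v".

def has_deadlock(wait_for):
    """Return True iff the wait-for graph contains a cycle (deadlock)."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on stack / done
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:      # back edge closes a cycle
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in list(color))

# P1 waits for P2, P2 for P3, P3 for P1: a circular wait, hence deadlock.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```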
Title: Algorithmic approach to deadlock detection for resource allocation in heterogeneous platforms
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043849
Yunfan Liu, Xueshi Hou, Jiansheng Chen, Chang Yang, G. Su, W. Dou
Facial expression recognition has important practical applications. In this paper, we propose a method based on the combination of optical flow and a deep neural network, the stacked sparse autoencoder (SAE). The method classifies facial expressions into six categories (happiness, sadness, anger, fear, disgust, and surprise). To extract a representation of facial expressions, we choose optical flow because it can analyze video image sequences effectively and reduce the influence of individual appearance differences on facial expression recognition. We then train the stacked SAE with the optical flow field as input to extract high-level features. For classification, we apply a softmax classifier on the top layer of the stacked SAE. The method is evaluated on the Extended Cohn-Kanade Dataset (CK+). The expression classification results show that the SAE performs the classification effectively. Further experiments (transformation and purification) illustrate the feature extraction and input reconstruction abilities of the SAE.
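A sparse autoencoder layer is an ordinary sigmoid layer whose training objective adds a KL-divergence penalty keeping the average hidden activation near a small target rho. The sketch below shows the forward pass and that penalty with made-up weights and a tiny input; it is a conceptual illustration, not the paper's trained network or its optical-flow front end.

```python
# One sparse-autoencoder layer: hidden activations plus the KL
# sparsity penalty that pushes activations toward a target rho.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(x, W, b):
    """Hidden activations h = sigmoid(W x + b)."""
    return [sigmoid(sum(w * xj for w, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def kl_sparsity(h, rho=0.05):
    """Sum over hidden units of KL(rho || rho_hat)."""
    return sum(rho * math.log(rho / r)
               + (1 - rho) * math.log((1 - rho) / (1 - r))
               for r in h)

x = [0.2, -0.1, 0.4]                      # e.g. optical-flow features
W = [[0.5, -0.3, 0.8], [-0.2, 0.7, 0.1]]  # hypothetical weights
b = [0.0, -0.5]
h = layer(x, W, b)
print(kl_sparsity(h) > 0)  # penalty is positive whenever activations stray from rho
```

During training the penalty is added to the reconstruction loss, so gradient descent drives most hidden units toward near-zero activation.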
Title: Facial expression recognition and generation using sparse autoencoder
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043861
Jie Wang, Rui Wang, Su Zhang, Jing Ding, Yuemin Zhu
Idiopathic generalized epilepsy (IGE) and symptomatic generalized epilepsy (SGE) are two kinds of generalized epilepsy. In this study, we discuss methods for the automatic segmentation of MR images of patients with these two kinds of epilepsy. K-means clustering, expectation-maximization, and fuzzy c-means algorithms were employed to segment brain images of patients with IGE. For patients with SGE, a trimmed likelihood estimator combined with a Gaussian mixture model, which we improved upon existing work, was employed to detect obvious brain lesions in fluid-attenuated inversion recovery images. Gray matter, white matter, and cerebrospinal fluid were then segmented from the remaining normal brain tissue. Similarity metrics were used to evaluate the performance of the different segmentation methods. The Dice similarity coefficient of the segmentation results exceeded 70%, satisfying the basic clinical requirement. The segmentation results were acceptable to clinicians and can provide more disease information for diagnosing and treating epilepsy.
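K-means on pixel intensities is the simplest of the three IGE segmentation methods compared: each tissue class is represented by a centroid intensity, and pixels join the nearest one. Below is a one-dimensional sketch on synthetic intensities (the three groups loosely stand in for CSF, gray matter, and white matter); the clinical pipeline operates on full MR volumes.

```python
# 1-D K-means on pixel intensities, deterministic initialization:
# centroids start spread evenly across the intensity range.

def kmeans_1d(values, k=3, iters=20):
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each intensity to its nearest centroid.
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Three well-separated synthetic intensity groups.
pixels = [10, 12, 11, 60, 62, 58, 120, 118, 122]
print(kmeans_1d(pixels, k=3))  # [11.0, 60.0, 120.0]
```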
Title: Automatic segmentation of brain MR images for patients with different kinds of epilepsy
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043870
Phattaradanai Kiratiwudhikul, Pornchai Chanyagorn
Pre-term infants (less than 37 weeks gestational age) usually have immature lung development, which results in poor oxygen saturation of red blood cells. Blood oxygen saturation is measured as the percentage of peripheral capillary oxygen saturation (SpO2). Medical doctors order oxygen therapy to maintain the SpO2 of such infants between 90% and 95%, whereas the SpO2 of healthy infants is 99-100%. Oxygen therapy is a procedure to stimulate lung function and sustain life. A registered nurse (RN) is responsible for periodically adjusting the fraction of inspired oxygen (FiO2), the proportion of oxygen gas provided to the infant, between 21% and 100%. In practice, the adjustment can only be made as often as every 20-30 minutes, which may not be adequate. This reduces the effectiveness of oxygen therapy and results in a longer hospital stay. A critical adjustment error could also cause blindness due to oxygen toxicity or death due to hypoxia. This research develops a reliable embedded system that automatically controls FiO2 according to the SpO2 range ordered by medical doctors. As a result, the risks of oxygen toxicity and hypoxia can be minimized. The system also allows medical doctors to use the recorded data for future care planning in oxygen therapy.
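The control loop can be pictured as: raise FiO2 when SpO2 falls below the ordered band, lower it when SpO2 rises above, and clamp to the physical 21-100% range. The proportional rule, gain, and numbers below are entirely assumed for illustration; they are not the paper's control law, and nothing here is clinical guidance.

```python
# Toy proportional FiO2 controller for a 90-95% SpO2 target band.
# Gain and step logic are illustrative assumptions only.

def adjust_fio2(fio2, spo2, target_low=90.0, target_high=95.0, gain=0.5):
    """Return the next FiO2 setting (percent), clamped to 21-100."""
    if spo2 < target_low:          # under-saturated: raise oxygen fraction
        fio2 += gain * (target_low - spo2)
    elif spo2 > target_high:       # over-saturated: lower it (toxicity risk)
        fio2 -= gain * (spo2 - target_high)
    return max(21.0, min(100.0, fio2))

print(adjust_fio2(30.0, 85.0))  # 32.5 : SpO2 below band, FiO2 increased
print(adjust_fio2(30.0, 98.0))  # 28.5 : SpO2 above band, FiO2 decreased
print(adjust_fio2(30.0, 92.0))  # 30.0 : inside the band, unchanged
```

An embedded implementation would run this loop every few seconds rather than every 20-30 minutes, which is the gap the system is meant to close.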
Title: Gas mixture control system for oxygen therapy in pre-term infants
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043863
Antorweep Chakravorty, Chunming Rong, P. Evensen, T. Wlodarczyk
The adoption of new technologies in the electrical energy infrastructure enables the development of novel energy efficiency services. The introduction of smart meters into residential households allows the collection of granular energy usage measurements at frequent intervals. Analysis of such data can bring detailed insights into the consumption behavior of households, allowing more accurate prediction of future loads. Given the data-intensive nature of these technologies, recent big data solutions allow harnessing the enormous amounts of data being generated. We present a novel, scalable, distributed Gaussian-means clustering algorithm for analyzing the energy consumption behavior of households in relation to contributing factors such as weather conditions, type of day, and time of day. Based on forecasts of these contributing factors, we were able to predict a household's future energy usage much more accurately than standard regression methods used for load forecasting.
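Gaussian-style clustering of consumption readings boils down to expectation-maximization on a Gaussian mixture: alternately assign soft responsibilities and re-estimate means, variances, and weights. The serial 1-D toy below, on synthetic "low usage" vs "high usage" readings, shows one building block; the paper's algorithm is distributed and multidimensional, so this is a sketch of the principle only.

```python
# EM for a two-component 1-D Gaussian mixture on hourly kWh readings.
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_step(xs, mus, vars_, weights):
    # E-step: responsibility of each component for each reading.
    resp = []
    for x in xs:
        p = [w * gauss(x, m, v) for w, m, v in zip(weights, mus, vars_)]
        s = sum(p)
        resp.append([pi / s for pi in p])
    # M-step: re-estimate parameters from responsibilities.
    n = [sum(r[k] for r in resp) for k in range(2)]
    mus = [sum(r[k] * x for r, x in zip(resp, xs)) / n[k] for k in range(2)]
    vars_ = [max(1e-6, sum(r[k] * (x - mus[k]) ** 2
                           for r, x in zip(resp, xs)) / n[k])
             for k in range(2)]
    weights = [nk / len(xs) for nk in n]
    return mus, vars_, weights

xs = [0.2, 0.3, 0.25, 2.0, 2.2, 1.9]      # low vs high consumption hours
mus, vars_, weights = [0.0, 3.0], [1.0, 1.0], [0.5, 0.5]
for _ in range(30):
    mus, vars_, weights = em_step(xs, mus, vars_, weights)
print(round(mus[0], 2), round(mus[1], 2))  # means near 0.25 and 2.03
```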
Title: A distributed gaussian-means clustering algorithm for forecasting domestic energy usage
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043858
M. A. U. Alam, Nirmalya Roy
To promote independent living for the elderly population, activity recognition approaches have been investigated extensively to infer activities of daily living (ADLs) and instrumental activities of daily living (I-ADLs). Deriving gestural activities (such as talking, coughing, and deglutition) and integrating them with activity recognition approaches can not only help identify the daily activities and social interactions of older adults, but also provide unique insights into their long-term health care, wellness management, and ambulatory conditions. Gestural activities (GAs) in general help identify fine-grained physiological symptoms and chronic psychological conditions that are not directly observable from traditional activities of daily living. In this paper, we propose GeSmart, an energy-efficient wearable smart-earring-based GA recognition model for detecting a combination of speech and non-speech events. To capture GAs, we propose to use only the accelerometer inside our smart earring, owing to its energy-efficient operation and ubiquitous presence in everyday wearable devices. We present initial results and insights based on a C4.5 classification algorithm for inferring infrequent GAs. We then propose a novel change-point-detection-based hybrid classification method exploiting the emerging patterns in a variety of GAs to detect and infer infrequent GAs. Experimental results based on real data traces collected from 10 users demonstrate that this approach improves the accuracy of GA classification by over 23% compared to previously proposed pure classification-based solutions. We also note that accelerometer-based earrings are surprisingly informative and energy efficient (by a factor of 2.3) for identifying different types of GAs.
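A change-point detector segments the accelerometer stream before classification by locating where its statistics shift. The minimal sketch below scans for the split index that maximizes the between-segment mean difference on a synthetic magnitude stream; GeSmart's hybrid method layers a classifier on top of such segmentation, so this illustrates only the segmentation idea.

```python
# Minimal mean-shift change-point detection on an accelerometer
# magnitude stream: return the split maximizing the mean difference.

def change_point(signal, min_seg=2):
    """Index of the best single change point, or None if too short."""
    best_i, best_gap = None, 0.0
    for i in range(min_seg, len(signal) - min_seg + 1):
        left, right = signal[:i], signal[i:]
        gap = abs(sum(left) / len(left) - sum(right) / len(right))
        if gap > best_gap:
            best_i, best_gap = i, gap
    return best_i

# Quiet wearing period, then a gesture begins at index 5.
stream = [0.1, 0.2, 0.1, 0.15, 0.1, 1.2, 1.1, 1.3, 1.25, 1.2]
print(change_point(stream))  # 5
```

Each detected segment would then be passed to the classifier, rather than classifying fixed-length windows blindly.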
Title: GeSmart: A gestural activity recognition model for predicting behavioral health
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043855
Adnan Khan, S. Imon, Sajal K. Das
Participatory sensing is an approach to data collection that monitors different scenarios with the help of smartphone sensors. As more sensors are added to smartphones, monitoring a wide range of scenarios has become possible with participatory sensing. An important issue in such applications is the coverage of the collected data, which reflects how well the data samples represent the monitored area. In the traditional approach, the data collection process is assisted by a server that knows the locations of the participating devices and selects the necessary ones to cover the monitored area efficiently. However, for battery-powered devices such as smartphones, sending frequent location updates to the server is quite energy-expensive. In this paper, we propose a framework called STREET for data collection from urban streets that addresses the coverage problem without requiring a participating mobile device to send location updates to the server. In particular, our framework can collect data samples that satisfy a specified partial-coverage, full-coverage, or k-coverage requirement. STREET is assisted by a simple localization scheme for mobile devices that minimizes the use of the location sensor (e.g., GPS) during the data collection process. Simulation studies show that our approach can significantly reduce the energy consumption of the participating mobile devices.
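The k-coverage requirement can be made concrete by discretizing a street into fixed-length segments and demanding at least k samples per segment. The sketch below checks that requirement on invented 1-D sample positions; it illustrates the coverage criterion only, not STREET's server-free protocol or its localization scheme.

```python
# k-coverage check over discretized street segments.

def coverage(samples, n_segments, seg_len):
    """Count samples per segment from 1-D positions along a street."""
    counts = [0] * n_segments
    for pos in samples:
        idx = min(int(pos // seg_len), n_segments - 1)
        counts[idx] += 1
    return counts

def is_k_covered(counts, k):
    """Every segment must hold at least k samples."""
    return all(c >= k for c in counts)

# A 100 m street in four 25 m segments; hypothetical sample positions.
positions = [3.0, 10.0, 30.0, 40.0, 55.0, 60.0, 80.0, 99.0]
counts = coverage(positions, n_segments=4, seg_len=25.0)
print(counts)                      # [2, 2, 2, 2]
print(is_k_covered(counts, k=2))   # True
print(is_k_covered(counts, k=3))   # False
```

Full coverage is the k=1 case, and partial coverage would relax `all` to a fraction of segments.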
Title: Ensuring energy efficient coverage for participatory sensing in urban streets
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043852
Chenhui Li, G. Baciu, Yu Han
The visualization of high-density streaming points has become a challenge in information exploration. In this paper, we present a new pipeline for the interactive visualization of large point sets. The pipeline is based on the idea that a heat map can overcome the overlap problem in the visualization of high-density streaming points. We first define a regular streaming format for large point sets that can be updated or changed continually. Based on the streaming points, we use kernel density estimation to estimate the point distribution and visualize the density image. Perceptual and interactive features are also considered in our visualization. To our knowledge, our pipeline is the first work that focuses on the perceptual visualization of high-density streaming points. The main step of the pipeline is accelerated via GPU rendering to enable real-time interaction with the visualized scene. We demonstrate the visual effectiveness of our pipeline on a geographical dataset of high-density streaming points.
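Kernel density estimation turns overlapping points into a smooth density field: each point spreads a Gaussian kernel over the grid instead of landing on a single pixel. The CPU sketch below, with an invented point set and bandwidth, shows the density computation that the pipeline accelerates on the GPU.

```python
# Gaussian kernel density estimation over a coarse heat-map grid.
import math

def kde_grid(points, width, height, bandwidth=1.0):
    """Density value at each integer grid cell from 2-D points."""
    grid = [[0.0] * width for _ in range(height)]
    inv = 1.0 / (2.0 * bandwidth ** 2)
    for y in range(height):
        for x in range(width):
            grid[y][x] = sum(math.exp(-((x - px) ** 2 + (y - py) ** 2) * inv)
                             for px, py in points)
    return grid

points = [(2.0, 2.0), (2.2, 1.8), (7.0, 5.0)]   # a dense pair and a lone point
grid = kde_grid(points, width=10, height=8)
hottest = max((grid[y][x], x, y) for y in range(8) for x in range(10))
print(hottest[1], hottest[2])  # 2 2 : the density peak sits at the dense pair
```

A real heat map would then map these densities through a color ramp; the GPU version parallelizes the per-cell sums.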
Title: Interactive visualization of high density streaming points with heat-map
Pub Date: 2014-11-01 | DOI: 10.1109/SMARTCOMP.2014.7043838
Min Liu, Guangtao Zhai, Ke Gu, Xiaokang Yang
In this paper, we present a new algorithm for blind/no-reference image quality assessment (BIQA/NR-IQA). Most existing measures are “opinion-aware”, requiring images with human opinion scores in order to map image features to them. Obtaining human scores for images is, however, commonly considered uneconomical, so we focus on “opinion-free” (OF) quality metrics in this research. This paper develops a learning-based BIQA approach with three steps that integrates local and global features. In the first step, we extract local features using quality-aware clustering, with the centroid of each quality level trained by K-means; in the second step, we compute global features based on natural scene statistics. Finally, the third step uses support vector regression (SVR) to train a regression module that maps the local and global features to an overall image quality score. Experimental results on the LIVE, TID2008, CSIQ, and TID2013 databases validate the effectiveness of the proposed metric (a general framework) compared to popular no-, reduced-, and full-reference IQA approaches.
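Natural-scene-statistics features are typically built from mean-subtracted, contrast-normalized (MSCN) coefficients: each pixel is normalized by its local neighborhood's mean and standard deviation, and distortion shows up in the statistics of the result. The 3x3-window sketch below, on a toy patch with an assumed stabilizing constant, illustrates that transform; the paper's exact global feature set may differ.

```python
# MSCN (mean-subtracted, contrast-normalized) coefficients with a
# 3x3 local window, clipped at image borders.

def mscn(img, eps=1.0):
    """Per-pixel (value - local mean) / (local std + eps)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nb = [img[j][i]
                  for j in range(max(0, y - 1), min(h, y + 2))
                  for i in range(max(0, x - 1), min(w, x + 2))]
            mu = sum(nb) / len(nb)
            sigma = (sum((v - mu) ** 2 for v in nb) / len(nb)) ** 0.5
            out[y][x] = (img[y][x] - mu) / (sigma + eps)
    return out

patch = [[10, 10, 10],
         [10, 80, 10],
         [10, 10, 10]]
coeffs = mscn(patch)
print(coeffs[1][1] > 0)   # bright centre -> positive coefficient
print(coeffs[0][0] < 0)   # darker corner -> negative coefficient
```

Histogram statistics of these coefficients (e.g. fitted distribution parameters) would then feed the SVR stage alongside the local cluster features.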
Title: Learning to integrate local and global features for a blind image quality measure