Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687592
Multiscale entropy analysis of the velocity signal in Impinging Stream Mixer
Jianwei Zhang, Yanfang Song
The velocity signal of an Impinging Stream Mixer (ISM) was measured in the laboratory with a Laser Doppler Anemometer. Because the signal carries a great deal of complex information, the multiscale entropy (MSE) algorithm, which measures complexity over a range of scales, was used for the first time to analyze the complexity of the ISM velocity time series. Matlab routines for the computation were written according to the MSE theory. Comparing MSE under different embedding dimensions m and tolerances r shows that the choice of parameters strongly influences the result. Computing MSE over different cross-sections shows that the axial flow is dominant. To distinguish velocity signals, MSE was also computed at different rotating speeds; the results indicate that MSE increases as the rotating speed increases, and the complexity grows uniformly.
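The paper's Matlab routines are not reproduced here, but the MSE computation itself is standard: coarse-grain the series at each scale factor, then compute sample entropy with embedding dimension m and tolerance r. A minimal Python sketch of that procedure; the synthetic signal, m = 2 and r = 0.15·std are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def sample_entropy(x, m, r):
    """Sample entropy of a 1-D series with embedding dimension m and tolerance r."""
    n = len(x)
    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance between template i and all later templates
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count
    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, m=2, r_factor=0.15, max_scale=10):
    """Coarse-grain the series at each scale and return the sample entropy curve."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    mse = []
    for scale in range(1, max_scale + 1):
        n = len(x) // scale
        coarse = x[:n * scale].reshape(n, scale).mean(axis=1)
        mse.append(sample_entropy(coarse, m, r))
    return np.array(mse)

# Example on a synthetic "velocity" record; a real LDA signal would be loaded instead.
rng = np.random.default_rng(0)
velocity = rng.normal(size=3000)
print(multiscale_entropy(velocity, m=2, r_factor=0.15, max_scale=5))
```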
{"title":"Multiscale entropy analysis of the velocity signal in Impinging Stream Mixer","authors":"Jianwei Zhang, Yanfang Song","doi":"10.1109/PIC.2010.5687592","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687592","url":null,"abstract":"In laboratory the velocity signal of Impinging Stream Mixer (ISM) was measured with Laser Doppler Anemometer. The velocity signal contains a lot of complicated information. Multiscale entropy (MSE) algorithm, which provides a way to measure complexity over a range of scales, was first used to analysis the complexity of the velocity time series in ISM. According to the MSE theory, draw up Matlab procedures for scientific computing. It is obtained that the choice of parameter has a strong influence on MSE by comparing MSE under different embedding dimension m and tolerance r. It is shown that the axial flow is in state of being dominant by computing MSE under different sections. To distinguish the velocity signal, MSE under different rotating speed were computed. The results indicate that MSE increases with the augment of the rotating speed, and the complexity increases homogeneously.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132945894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687859
The research of label-mapping-based entity attribute extraction
Huilin Liu, Cheng Chen, Liwei Zhang, Guoren Wang
With the rapid development of new media such as computers and the Internet, extracting valuable entity attribute information from Web text has become important. To address this problem, this paper proposes SALmap. The model first applies a seed method that builds common candidate attribute label sets from data-format rules. It then constructs the mapping between attributes and labels using attribute value information and a maximum entropy model, and labels the entity instances accordingly. Finally, a hidden Markov model is applied to extract the relevant entity attributes. Experiments show that the SALmap model significantly improves the precision and performance of entity attribute extraction.
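The SALmap pipeline (seed rules, maximum entropy label mapping, HMM-based extraction) is not published as code; the sketch below only illustrates the label-mapping step under stated assumptions. The candidate labels, canonical attributes, and training pairs are hypothetical placeholders, and scikit-learn's LogisticRegression stands in for the maximum entropy model (the two are equivalent formulations):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training pairs: candidate label string -> canonical attribute.
candidate_labels = ["date of birth", "born", "birth date",
                    "place of birth", "birthplace",
                    "occupation", "profession", "job title"]
attributes = ["birth_date", "birth_date", "birth_date",
              "birth_place", "birth_place",
              "occupation", "occupation", "occupation"]

# Maximum entropy (multinomial logistic regression) over character n-gram features,
# so an unseen but similar label string can still map to a plausible attribute.
maxent = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
maxent.fit(candidate_labels, attributes)

# Map new candidate labels harvested by the seed step to attributes.
print(maxent.predict(["birthday", "current occupation"]))
```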
{"title":"The research of label-mapping-based entity attribute extraction","authors":"Huilin Liu, Cheng Chen, Liwei Zhang, Guoren Wang","doi":"10.1109/PIC.2010.5687859","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687859","url":null,"abstract":"With the rapid development of new media, such as computer and Internet, extract valuable entity attribute information from Web text can be significant. Aiming at this problem, this paper puts forward SALmap, this model calls seed method at first, which will create common candidate attribute label sets by defining data format rules. Then we construct the mapping relationship between the attributes and the labels using attribute value information and the maximum entropy model, and label the entity instance as well. Finally, hidden Markov model is applied to the relevant entity attribute extraction. Experiments prove SALmap model can significantly improve the precision and performance of entity attribute extraction.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"298 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133235435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687912
Farsi license plate detection based on element analysis in complex images
M. Rasooli, S. Ghofrani, A. Ahmadi
This paper presents a robust method that detects and identifies Farsi license plates regardless of image contrast, blur, distance to the car, and camera rotation; an image may also contain more than one car. The proposed method extracts edges, determines candidate regions using adaptive image enhancement and a sliding window, and finally detects the license plates by region element analysis. The region element analysis relies on the geometric structure, continuity, and parallelism of the plate. The algorithm was run on the proposed database of 300 images and achieved a considerable average accuracy.
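The authors' implementation is not available, but the edge-extraction and candidate-region stage can be sketched with OpenCV. The thresholds, morphology kernel, and aspect-ratio bounds below are illustrative guesses rather than the paper's values, and "plate.jpg" is a placeholder path:

```python
import cv2

def candidate_plate_regions(path="plate.jpg"):
    """Rough sketch: enhance contrast, extract edges, close them into blobs,
    and keep blobs whose geometry resembles a license plate."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    enhanced = cv2.equalizeHist(gray)                     # stand-in for adaptive enhancement
    edges = cv2.Canny(enhanced, 100, 200)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    # [-2] picks the contour list in both OpenCV 3 and OpenCV 4.
    contours = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        # License plates are wide and short; keep elongated, reasonably large regions.
        if 2.5 < aspect < 6.0 and w > 60 and h > 15:
            candidates.append((x, y, w, h))
    return candidates

if __name__ == "__main__":
    print(candidate_plate_regions())
```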
{"title":"Farsi license plate detection based on element analysis in complex images","authors":"M. Rasooli, S. Ghofrani, A. Ahmadi","doi":"10.1109/PIC.2010.5687912","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687912","url":null,"abstract":"In this paper, a safe and powerful method is presented which can detect and identify Farsi license plate irrespective of image contrast, lack of clarity, distance cars, and camera rotation. In addition, more than one car can be existed in image. The proposed method extracts edges and then determines the candidate regions by using adaptive image enhancement and applied a window movement. Finally by region elements analysis, the license plates are detected. The region elements analysis is working according to the plate geometric structure, continuity and parallelism. The algorithm has been run on the proposed database which includes 300 images and the obtained average accuracy is considerable.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133498154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687407
Visualizing search results based on multi-label classification
Zhihua Wei, D. Miao, Rui Zhao, Chen Xie, Zhifei Zhang
Search engines play an important role in the information society, but it is not easy to find the information of interest among the large number of returned results. A Web search visualization system aims to help users locate interesting documents rapidly within these results. This paper explores the visualization of Web search results based on multi-label text classification: the results returned by a search engine are classified with multiple labels, and users can then browse the information of interest by the category labels assigned by our algorithm. A parallelized Naïve Bayes multi-label classification algorithm is proposed for this application, together with a two-step feature selection algorithm that reduces the impact of feature correlation and feature redundancy on the Naïve Bayes classifier. A prototype system, named TJ-MLWC, has been developed that supports browsing search results by one or several categories.
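Neither the parallelized classifier nor the exact two-step feature selection is specified in enough detail to reproduce, but the core pattern, one Naïve Bayes model per category label over selected text features, can be sketched with scikit-learn. The documents and labels are hypothetical placeholders, and a feature cap stands in for the paper's selection procedure:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical search results, each carrying one or more category labels.
docs = [
    "python tutorial for web scraping",
    "stock market analysis with python",
    "healthy recipes and nutrition tips",
    "machine learning for stock market prediction",
]
labels = [{"programming"}, {"programming", "finance"},
          {"health"}, {"programming", "finance"}]

binarizer = MultiLabelBinarizer()
Y = binarizer.fit_transform(labels)

# Binary relevance: one Naive Bayes classifier per category label, trained in
# parallel via n_jobs. max_features is a crude stand-in for feature selection.
model = make_pipeline(
    TfidfVectorizer(max_features=5000),
    OneVsRestClassifier(MultinomialNB(), n_jobs=-1),
)
model.fit(docs, Y)

predicted = model.predict(["python library for stock prices"])
print(binarizer.inverse_transform(predicted))
```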
{"title":"Visualizing search results based on multi-label classification","authors":"Zhihua Wei, D. Miao, Rui Zhao, Chen Xie, Zhifei Zhang","doi":"10.1109/PIC.2010.5687407","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687407","url":null,"abstract":"Search engine has played an important role in information society. However, it is not very easy to find interest information from too much returned search results. Web search visualization system aims at helping users to locate interest documents rapidly from a great amount of returned search results. This paper explores visualization of Web search results based on multi-label text classification method. It conducts a multi-label classification process on the results from search engine. In this framework, users could browse interest information according to category label added by our algorithm. A paralleled Naïve Bayes multi-label classification algorithm is proposed for this application. A two-step feature selection algorithm is constructed to reduce the effect on Naïve Bayes classifier resulted from feature correlation and feature redundancy. A prototype system, named TJ-MLWC, is developed, which has the function of browsing search results by one or several categories.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133760986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687866
An optimized method and implementation for parsing MP4 metadata
Lina Zhao, L. Guan
MP4 is a popular multimedia container format, and most multimedia players can play MP4 files. Since a multimedia player is typically an embedded system with limited CPU and memory, improving the efficiency of parsing MP4 metadata is important. This paper presents an optimized metadata parsing method; experimental results show that it noticeably lowers the CPU load and reduces memory usage. We also provide an implementation method for converting the media data in MP4 files into audio and video streams that the player's decoder can play.
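The paper's optimization is not reproduced here, but the structure being parsed is standard ISO base media format: an MP4 file is a sequence of boxes, each starting with a 4-byte big-endian size and a 4-byte type, with the metadata living under the "moov" box. A minimal Python sketch of a top-level box scan; "video.mp4" is a placeholder path:

```python
import struct

def iter_boxes(path):
    """Yield (box_type, size, offset) for the top-level boxes of an MP4 file."""
    with open(path, "rb") as f:
        offset = 0
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:
                size = struct.unpack(">Q", f.read(8))[0]   # 64-bit "largesize"
            if size == 0:
                # Box runs to the end of the file; report it and stop.
                yield box_type.decode("ascii", "replace"), size, offset
                break
            if size < 8:
                break                                       # malformed box header
            yield box_type.decode("ascii", "replace"), size, offset
            offset += size
            f.seek(offset)            # skip the payload instead of reading it

if __name__ == "__main__":
    for box_type, size, offset in iter_boxes("video.mp4"):
        print(f"{box_type:4s} size={size} offset={offset}")
```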
{"title":"An optimized method and implementation for parsing MP4 metadata","authors":"Lina Zhao, L. Guan","doi":"10.1109/PIC.2010.5687866","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687866","url":null,"abstract":"MP4 is a popular kind of multimedia container format. Most of multimedia players can play MP4 files. Since multimedia player is an embedded system with limited CPU and memory, it is very important to improve the efficiency of parsing MP4 metadata. In this paper we provide an optimized method for parsing metadata. The experimental result shows that the optimized method can obviously lower down the CPU load and save the memory utilization. We also provide an implementation method for converting media data in MP4 files to audio and video streams, which the decoder of multimedia players is able to play.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130378760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5688009
Static analysis for Java exception propagation structure
Xiang Qiu, Li Zhang, Xiaoli Lian
The exception handling mechanisms of modern programming languages are frequently used to build robust systems, but exception propagation makes them confusing for software developers. Centering on the question "for a raised exception, where is it handled?", we analyze the dependency between exception propagation and method calls. By associating each method with exception types through throw relationships (declared explicitly in the method signature) or catch relationships, this paper builds the Software Extended Dependency Graph and proposes a static exception propagation path extraction algorithm, with which we can analyze exception propagation hops, the exception hierarchy, and the exception propagation boundary.
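The Software Extended Dependency Graph construction is not reproduced here; the sketch below only illustrates the propagation-path idea as a graph search over hypothetical analyzer output. The call graph and catch annotations are invented placeholders, and Python stands in for an analysis over real Java code:

```python
# Hypothetical per-method facts produced by a static analyzer:
# callers maps callee -> set of calling methods; catches maps method -> exception
# types it handles around the relevant call sites.
callers = {
    "Dao.query":       {"Service.load"},
    "Service.load":    {"Controller.show", "Batch.run"},
    "Controller.show": set(),
    "Batch.run":       set(),
}
catches = {
    "Controller.show": {"SQLException"},
    # Batch.run declares `throws SQLException` and does not catch it.
}

def propagation_paths(method, exc, path=None):
    """Enumerate propagation paths of `exc` raised in `method` up to a handler,
    or to an entry point that lets the exception escape."""
    path = (path or []) + [method]
    if exc in catches.get(method, set()):
        yield path + ["<handled>"]
        return
    parents = callers.get(method, set())
    if not parents:
        yield path + ["<unhandled>"]
        return
    for caller in sorted(parents):
        yield from propagation_paths(caller, exc, path)

for p in propagation_paths("Dao.query", "SQLException"):
    print(" -> ".join(p))   # each hop is one propagation step
```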
{"title":"Static analysis for java exception propagation structure","authors":"Xiang Qiu, Li Zhang, Xiaoli Lian","doi":"10.1109/PIC.2010.5688009","DOIUrl":"https://doi.org/10.1109/PIC.2010.5688009","url":null,"abstract":"Exception handling mechanism in modern programming languages is frequently used to build robust systems. However, it presents more daze for software developers because of exception propagation. Centering on the question: “For raising exception, how to identify where handles the exception?” we analyze the dependency between exception propagation and method call. Then associating the method with exception types by the relationship of throw (declared explicitly in method signature) or catch, this paper builds the Software Extended Dependency Graph and proposes a static exception propagation path extraction algorithm, so we can analyze exception propagation hops, the exception hierarchy and exception propagation boundary.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114247818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687956
Method of information process based on text mining and word segmentation
Binxiang Liu, Hailin Li, Xiang Cheng
A method of information processing based on text mining and word segmentation technology is presented, which uses the relevant principles in the field of text mining and word segmentation to preprocess and transform information from the Internet. It settles the problem of high data dimensionality caused by redundant features extracted from the text and improves the performance of the CE algorithm. Finally, the results, together with the CPU and memory load observed on the ceramic company's server, show that the method behaves intelligently, produces precise results, offers a well-designed interface, is easy to operate, and runs synchronously.
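The abstract does not specify the segmentation tool or the CE algorithm, so only the preprocessing idea can be illustrated: segment the crawled text into terms, then drop very rare and near-ubiquitous terms to keep the feature dimension manageable. The documents and thresholds below are hypothetical, and whitespace tokenization stands in for a real word segmenter:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical crawled snippets; in practice these would come from the crawler
# and pass through a proper word segmenter before vectorization.
docs = [
    "ceramic tile glaze firing temperature report",
    "glaze defect analysis for ceramic tile batch",
    "weekly firing temperature and kiln maintenance log",
    "kiln maintenance schedule and defect summary",
]

# min_df drops terms appearing in too few documents (noise), max_df drops terms
# appearing in nearly all documents (redundant), shrinking the matrix handed to
# the downstream algorithm.
vectorizer = CountVectorizer(min_df=2, max_df=0.9)
X = vectorizer.fit_transform(docs)

raw_tokens = {w for d in docs for w in d.split()}
print(f"kept {X.shape[1]} terms out of {len(raw_tokens)} raw tokens")
print(sorted(vectorizer.get_feature_names_out()))
```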
{"title":"Method of information process based on test mining and word segmentation","authors":"Binxiang Liu, Hailin Li, Xiang Cheng","doi":"10.1109/PIC.2010.5687956","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687956","url":null,"abstract":"Method of information process based on test mining and segmentation technology is provided, which uses the relative principles in the filed of test mining and word segmentation to preprocess and transform the information from internet. The problem of large data dimension caused by the redundancy characters extracted from the test is settled and it improves the performance of the CE arithmetic. Finally, the results are showed and the burden of the CPU and memory of the server which is brought by the of the ceramic company testify that this method have the characters of intelligent behavior, precise result, beautiful interface, easy operation and synchronization of running.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114434135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687891
Feature preserving consolidation for unorganized point clouds
Bao Li, Wei Jiang, Zhi-Quan Cheng, Gang Dang, Shiyao Jin
We introduce a novel method for the consolidation of unorganized point clouds with noise, outliers, non-uniformities, and sharp features. The method is feature preserving in the sense that, given an initial estimate of the normals, it can recover the sharp features of the original geometric data, which are usually contaminated during acquisition. The key ingredient of our approach is a weighting term defined in normal space, which effectively complements recently proposed consolidation techniques. Moreover, a normal mollification step is employed during consolidation so that, besides the position of each point, normal information respecting the sharp features is obtained. Experiments on both synthetic and real-world scanned models validate the ability of our approach to produce denoised, evenly distributed, and feature-preserving point clouds, which are preferred by most surface reconstruction methods.
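The full consolidation pipeline is not given in the abstract, but its key ingredient, a weight that decays as point normals diverge so that neighbors across a sharp edge contribute little, can be sketched. The Gaussian form and the bandwidth values below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np
from scipy.spatial import cKDTree

def consolidate_step(points, normals, k=16, sigma_p=0.05, sigma_n=0.3):
    """One smoothing pass: move each point toward a weighted average of its
    neighbors, where the weight combines spatial closeness with a normal-space
    term that suppresses neighbors whose normals disagree (e.g. across an edge)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    new_points = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        d = np.linalg.norm(points[nbrs] - points[i], axis=1)
        w_spatial = np.exp(-(d / sigma_p) ** 2)
        # Normal-space weight: ~1 when normals agree, ~0 when they diverge.
        cos = np.clip(normals[nbrs] @ normals[i], -1.0, 1.0)
        w_normal = np.exp(-((1.0 - cos) / sigma_n) ** 2)
        w = w_spatial * w_normal
        new_points[i] = (w[:, None] * points[nbrs]).sum(axis=0) / w.sum()
    return new_points

# Toy example: a noisy planar patch with constant normals.
rng = np.random.default_rng(1)
pts = rng.uniform(size=(500, 3)) * [1.0, 1.0, 0.0] + rng.normal(scale=0.01, size=(500, 3))
nrm = np.tile([0.0, 0.0, 1.0], (500, 1))
print("mean |z| before:", np.abs(pts[:, 2]).mean(),
      "after:", np.abs(consolidate_step(pts, nrm)[:, 2]).mean())
```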
{"title":"Feature preserving consolidation for unorganized point clouds","authors":"Bao Li, Wei Jiang, Zhi-Quan Cheng, Gang Dang, Shiyao Jin","doi":"10.1109/PIC.2010.5687891","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687891","url":null,"abstract":"We introduce a novel method for the consolidation of unorganized point clouds with noise, outliers, non-uniformities as well as sharp features. This method is feature preserving, in the sense that given an initial estimation of normal, it is able to recover the sharp features contained in the original geometric data which are usually contaminated during the acquisition. The key ingredient of our approach is a weighting term from normal space as an effective complement to the recently proposed consolidation techniques. Moreover, a normal mollification step is employed during the consolidation to get normal information respecting sharp features besides the position of each point. Experiments on both synthetic and real-world scanned models validate the ability of our approach in producing denoised, evenly distributed and feature preserving point clouds, which are preferred by most surface reconstruction methods.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123215529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687400
Face recognition based on cycle kernel and gray kernels
Qiang-rong Jiang, Qianqian Lu
Face recognition is the computer recognition of personal identity based on geometric or statistical features derived from face images. Even though humans can detect and identify faces in a scene with little or no effort, getting a computer to do the same is very challenging, and researchers have long sought simple, accurate, and convenient approaches to face recognition. In this paper, we propose a novel multiple kernel method based on a cycle kernel and gray kernels. Experimental results indicate that the multiple kernel method performs well.
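The cycle kernel and gray kernels are not defined in the abstract, so they cannot be reproduced; what can be sketched is the multiple-kernel pattern itself: precompute several kernel matrices on the face vectors, combine them with fixed weights, and feed the combined Gram matrix to a kernel classifier. The RBF and polynomial kernels, the weights, and the random "face" data below are stand-ins, not the paper's components:

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

def combined_kernel(A, B, weights=(0.6, 0.4)):
    """Weighted sum of two base kernels; stand-ins for the cycle/gray kernels."""
    return (weights[0] * rbf_kernel(A, B, gamma=0.01)
            + weights[1] * polynomial_kernel(A, B, degree=2))

# Hypothetical data: flattened grayscale face images scaled to [0, 1],
# with one identity label per row.
rng = np.random.default_rng(0)
X_train = rng.uniform(size=(40, 32 * 32))
y_train = np.repeat(np.arange(4), 10)            # 4 identities, 10 images each
X_test = X_train[::10] + rng.normal(scale=0.02, size=(4, 32 * 32))

clf = SVC(kernel="precomputed")
clf.fit(combined_kernel(X_train, X_train), y_train)
print(clf.predict(combined_kernel(X_test, X_train)))  # perturbed copies of 4 training faces
```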
{"title":"Face recognition based on cycle kernel and gray kernels","authors":"Qiang-rong Jiang, Qianqian Lu","doi":"10.1109/PIC.2010.5687400","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687400","url":null,"abstract":"Face recognition involves computer recognition of personal identity based on geometric or statistical features derived from face images. Even though humans can detect and identify faces in a scene with little or no effort, getting a computer to accomplish such objectives is very challenging. Researchers have been always investigating simple, accurate, and convenient approach to achieve face recognition. In this paper, we propose a novel multiple kernels method, which is based on cycle kernel and gray kernels. Experimental results indicate that multiple kernels method gets good performance.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124842012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687444
Application of tabu search heuristic algorithms for the purpose of energy saving in optimal load distribution strategy for multiple chiller water units
J. Zhang, Kanyu Zhang
The tabu search algorithm is applied to the optimal load distribution problem of a cooling system composed of multiple chiller water units, a problem characterized by complexity, constraints, nonlinearity, and modeling difficulty. Based on neighborhood search, tabu search can escape local optima and incorporates a memory mechanism from artificial intelligence. In this paper, two chiller water units operating in parallel are studied with the tabu algorithm. Compared with the conventional method, the results indicate that the load distribution found by tabu search consumes much less power, making the approach well suited to air conditioning system operation.
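The paper's chiller model is not given above, so the sketch below uses hypothetical quadratic power curves for two parallel units and a basic tabu search over the discretized load split; the curve coefficients, tabu tenure, neighborhood size, and step size are illustrative assumptions:

```python
import random

# Hypothetical power curves P_i(x) in kW for the load fraction x assigned to unit i;
# a real study would use fitted chiller performance data instead.
def power(unit, load):
    a, b, c = [(40.0, 55.0, 90.0), (35.0, 70.0, 80.0)][unit]
    return a * load * load + b * load + c

def total_power(split, demand):
    """Total consumption when unit 0 takes `split` of the demand and unit 1 the rest."""
    return power(0, split * demand) + power(1, (1.0 - split) * demand)

def tabu_search(demand, steps=200, tenure=10, step_size=0.01, seed=0):
    rng = random.Random(seed)
    current = best = 0.5                      # start from an even split
    best_cost = total_power(best, demand)
    tabu = []                                 # recently visited splits
    for _ in range(steps):
        neighbors = [min(1.0, max(0.0, current + rng.choice([-1, 1]) * step_size * k))
                     for k in range(1, 6)]
        neighbors = [round(n, 4) for n in neighbors if round(n, 4) not in tabu]
        if not neighbors:
            continue
        current = min(neighbors, key=lambda s: total_power(s, demand))
        tabu.append(round(current, 4))
        if len(tabu) > tenure:
            tabu.pop(0)                       # expire the oldest tabu entry
        if total_power(current, demand) < best_cost:
            best, best_cost = current, total_power(current, demand)
    return best, best_cost

split, cost = tabu_search(demand=1.0)
print(f"unit 0 share: {split:.2f}, total power: {cost:.1f} kW")
```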
{"title":"Application of tabu search heuristic algorithms for the purpose of energy saving in optimal load distribution strategy for multiple chiller water units","authors":"J. Zhang, Kanyu Zhang","doi":"10.1109/PIC.2010.5687444","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687444","url":null,"abstract":"Tabu search algorithm has been applied to solve the optimal load distribution strategy problem for the cooling system constituted by multiple chiller water units, which has the characteristic such as complexity, constraint, nonlinearity, modeling difficulty, etc. The tabu search algorithms based on the neighborhood search can avoid the local optimization avoidance and has the artificial intelligence memory mechanism. In this paper, two chiller water units connected in parallel working using the tabu algorithm was observed. Compared with the conventional method, the results indicated that the tabu search algorithms has much less power consumption and is very suitable for application in air condition system operation.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123665081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}