Title: Multiscale entropy analysis of the velocity signal in Impinging Stream Mixer
Authors: Jianwei Zhang, Yanfang Song
Pub Date: 2010-12-01  DOI: 10.1109/PIC.2010.5687592
Published in: 2010 IEEE International Conference on Progress in Informatics and Computing
Abstract: In laboratory experiments, the velocity signal of an Impinging Stream Mixer (ISM) was measured with a Laser Doppler Anemometer. The signal carries complex, multiscale information. The multiscale entropy (MSE) algorithm, which measures complexity over a range of scales, was applied for the first time to analyze the complexity of the velocity time series in an ISM, and Matlab routines were written to compute MSE according to the theory. Comparing MSE under different embedding dimensions m and tolerances r shows that the choice of parameters strongly influences the result. Computing MSE over different cross-sections shows that the axial flow is dominant. To distinguish velocity signals, MSE was computed at different rotating speeds; the results indicate that MSE increases uniformly with rotating speed, i.e., the complexity of the signal grows.
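The MSE procedure the abstract refers to (coarse-grain the series at each scale, then compute sample entropy with embedding dimension m and tolerance r) can be sketched as follows. The original authors used Matlab; this Python version is only a minimal reconstruction of the standard algorithm, with illustrative defaults (m = 2, r as a fraction of the signal's standard deviation), not the paper's code.

```python
import numpy as np

def coarse_grain(x, tau):
    """Average consecutive non-overlapping windows of length tau."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m, r):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching within r for m points also match for m + 1 points."""
    n = len(x)
    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to all later templates
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count
    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, m=2, r_factor=0.15, max_scale=10):
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)  # tolerance fixed from the original series' SD
    return [sample_entropy(coarse_grain(x, tau), m, r)
            for tau in range(1, max_scale + 1)]
```

A flat or rising MSE curve over scales is what distinguishes structured signals from uncorrelated noise, which is the comparison the paper draws across sections and rotating speeds.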
Title: The research of label-mapping-based entity attribute extraction
Authors: Huilin Liu, Cheng Chen, Liwei Zhang, Guoren Wang
Pub Date: 2010-12-01  DOI: 10.1109/PIC.2010.5687859
Abstract: With the rapid development of new media such as the computer and the Internet, extracting valuable entity attribute information from Web text has become significant. To address this problem, this paper proposes SALmap. The model first applies a seed method that builds common candidate attribute label sets by defining data-format rules. It then constructs the mapping between attributes and labels using attribute value information and a maximum entropy model, labelling the entity instances in the process. Finally, a hidden Markov model is applied to extract the relevant entity attributes. Experiments show that the SALmap model significantly improves the precision and performance of entity attribute extraction.
Title: Farsi license plate detection based on element analysis in complex images
Authors: M. Rasooli, S. Ghofrani, A. Ahmadi
Pub Date: 2010-12-01  DOI: 10.1109/PIC.2010.5687912
Abstract: In this paper, a robust method is presented that detects and identifies Farsi license plates irrespective of image contrast, lack of clarity, distance to the car, and camera rotation. Moreover, more than one car may be present in the image. The proposed method extracts edges, determines candidate regions using adaptive image enhancement and a moving window, and finally detects the license plates through region-element analysis. The region-element analysis relies on the geometric structure, continuity, and parallelism of the plate. The algorithm was run on the proposed database of 300 images, and the average accuracy obtained is considerable.
Title: Visualizing search results based on multi-label classification
Authors: Zhihua Wei, D. Miao, Rui Zhao, Chen Xie, Zhifei Zhang
Pub Date: 2010-12-01  DOI: 10.1109/PIC.2010.5687407
Abstract: Search engines play an important role in the information society, yet it is not easy to find the information of interest among a flood of returned results. A Web search visualization system aims to help users locate relevant documents quickly within a large result set. This paper explores the visualization of Web search results based on multi-label text classification: it runs a multi-label classification process on the results returned by a search engine, so that users can browse information of interest by the category labels our algorithm assigns. A parallelized Naïve Bayes multi-label classification algorithm is proposed for this application, and a two-step feature selection algorithm is constructed to reduce the effect of feature correlation and feature redundancy on the Naïve Bayes classifier. A prototype system named TJ-MLWC is developed, which supports browsing search results by one or several categories.
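The general technique behind a Naïve Bayes multi-label classifier, one independent multinomial NB model per category label (binary relevance), can be sketched as below. This is a minimal from-scratch illustration under that assumption; the paper's parallelization and two-step feature selection are not reproduced, and the class name and interface here are illustrative, not the authors' code.

```python
import numpy as np

class BinaryRelevanceNB:
    """One multinomial Naive Bayes model per label: a document gets every
    label whose per-label posterior favours 'on' over 'off'."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing for word counts

    def fit(self, X, Y):
        # X: (docs, vocab) term counts; Y: (docs, labels) 0/1 indicators
        X, Y = np.asarray(X, float), np.asarray(Y)
        self.log_prior_, self.log_like_ = [], []
        for k in range(Y.shape[1]):
            pos, neg = X[Y[:, k] == 1], X[Y[:, k] == 0]
            # smoothed class priors for label k being on / off
            self.log_prior_.append((np.log((len(pos) + 1) / (len(X) + 2)),
                                    np.log((len(neg) + 1) / (len(X) + 2))))
            def smoothed(counts):
                c = counts.sum(axis=0) + self.alpha
                return np.log(c / c.sum())  # log P(word | class)
            self.log_like_.append((smoothed(pos), smoothed(neg)))
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        out = np.zeros((len(X), len(self.log_prior_)), dtype=int)
        for k, ((lp1, lp0), (ll1, ll0)) in enumerate(
                zip(self.log_prior_, self.log_like_)):
            s1 = lp1 + X @ ll1  # unnormalized log-posterior, label on
            s0 = lp0 + X @ ll0  # unnormalized log-posterior, label off
            out[:, k] = (s1 > s0).astype(int)
        return out
```

Because each label's model is independent, the per-label fits are trivially parallelizable, which is presumably what makes this formulation attractive for the paper's parallelized variant.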
Title: An optimized method and implementation for parsing MP4 metadata
Authors: Lina Zhao, L. Guan
Pub Date: 2010-12-01  DOI: 10.1109/PIC.2010.5687866
Abstract: MP4 is a popular multimedia container format, and most multimedia players can play MP4 files. Since a multimedia player is an embedded system with limited CPU and memory, improving the efficiency of parsing MP4 metadata is very important. In this paper we present an optimized method for parsing the metadata. Experimental results show that the optimized method noticeably lowers CPU load and reduces memory utilization. We also present an implementation for converting the media data in MP4 files into audio and video streams that the player's decoder can play.
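For context on what "parsing MP4 metadata" involves: an MP4 file is a tree of boxes (atoms), each headed by a 4-byte big-endian size and a 4-byte type code, with container boxes such as moov nesting further boxes. The sketch below walks that tree per the ISO base-media layout; it is a generic illustration, not the paper's optimized method, and CONTAINER_BOXES lists only a few common container types.

```python
import struct

# A few well-known container box types; real files have more.
CONTAINER_BOXES = {b"moov", b"trak", b"mdia", b"minf", b"stbl", b"edts"}

def parse_boxes(data, offset=0, end=None):
    """Return a list of (type, offset, size, children) for each box found
    between offset and end, recursing into known container boxes."""
    end = len(data) if end is None else end
    boxes = []
    while offset + 8 <= end:
        size, btype = struct.unpack_from(">I4s", data, offset)
        header = 8
        if size == 1:    # 64-bit "largesize" follows the type field
            size, = struct.unpack_from(">Q", data, offset + 8)
            header = 16
        elif size == 0:  # box extends to the end of the enclosing scope
            size = end - offset
        if size < header:
            break        # malformed box; stop rather than loop forever
        children = []
        if btype in CONTAINER_BOXES:
            children = parse_boxes(data, offset + header, offset + size)
        boxes.append((btype.decode("ascii"), offset, size, children))
        offset += size
    return boxes
```

On an embedded player, the cost the paper targets comes from walking large sample tables inside stbl; skipping payloads by seeking `offset += size` rather than reading them is the basic efficiency lever.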
Title: Face recognition based on cycle kernel and gray kernels
Authors: Qiang-rong Jiang, Qianqian Lu
Pub Date: 2010-12-01  DOI: 10.1109/PIC.2010.5687400
Abstract: Face recognition is the computer recognition of personal identity based on geometric or statistical features derived from face images. Even though humans can detect and identify faces in a scene with little or no effort, getting a computer to do so is very challenging, and researchers continue to look for simple, accurate, and convenient approaches. In this paper, we propose a novel multiple-kernel method based on a cycle kernel and gray kernels. Experimental results indicate that the multiple-kernel method performs well.
Title: An open user model service platform
Authors: Haiyan Zhao, Yanlan Chen
Pub Date: 2010-12-01  DOI: 10.1109/PIC.2010.5687593
Abstract: Obtaining and maintaining user models is very important for offering personalized services: the quality of the services directly depends on the quality of the models. Not surprisingly, many web sites construct user models in different ways so that they can recommend goods or services according to each user's preferences. Integrating the separate user models that different sources hold for one user clearly provides more comprehensive and accurate user information, so there is a need to obtain integrated user models by sharing them. After discussing the requirements of sharing user models, an open user model service platform is presented, and its architecture, key technologies, and an implemented prototype are introduced.
Title: RPC-based adjustment model for COSMO-SkyMed stereo slant/ground-range images
Authors: Zhen Li, Guo Zhang, H. Pan, Qiang Qiang
Pub Date: 2010-12-01  DOI: 10.1109/PIC.2010.5687479
Abstract: Stereoscopic SAR images offer an alternative to conventional stereo-photogrammetric survey for generating Digital Elevation Models (DEMs). The ground-range form of SAR products is often more popular with commercial users, since the pixel spacing on the ground is roughly the same for images taken at different look angles. The different mathematical descriptions of slant-range and ground-range products, however, make stereo modeling and adjustment challenging. Previous work applied sensor-model adjustment of range and timing parameters to SAR spotlight range images, promising 3-D mapping accuracies of around 2 m; but it used the direct least-squares method, which is too sensitive for ground-range images and therefore less than optimal for the stereo restitution of SAR range images. In this paper, an image-based transformation (geometric correction) that uses a small number of control points (CPs) together with the Rational Polynomial Coefficient (RPC) model is proposed to improve space-intersection accuracy. The development of this RPC-based adjustment method, which is practical to implement and applicable to both slant- and ground-range SAR products, is first described. By placing several well-distributed trihedral corner reflectors (CRs) at test sites and imaging the sites with COSMO-SkyMed's stripmap (SM) mode, the modeling quality of the delivered slant- and ground-range products was validated and the 3-D mapping potential assessed.
Title: Improved GA combined with GDBP algorithm for forecasting releasing behaviors of drug carrier
Authors: Li Mao, Deyu Qi, Xiaoxi Li
Pub Date: 2010-12-01  DOI: 10.1109/PIC.2010.5687960
Abstract: Bioadhesive drug delivery systems using starch-based colon-targeted drug carriers have drawn great attention in pharmaceutical science in recent years. A neural network (NN) prediction model was developed based on a hybrid of an improved genetic algorithm (GA) and a conjugate-gradient backpropagation (GDBP) NN, built on the key factors that affect the release behavior of starch-based colon-targeted drug carriers. In particular, the function-approximation capability and high efficiency of the GDBP NN are used to simulate the nonlinear relation between the key factors and the carrier's release behavior. The simulation results indicate that, compared with a traditional GA-BP NN, the training efficiency of the GA-GDBP NN is greatly improved. The model thus offers a new way to predict the release behavior of drug carriers and to guide the setting of factors in real experiments.
Title: RTK GPS enhanced reliability for automatic container terminal
Authors: Zheng Gui, Peng Xie
Pub Date: 2010-12-01  DOI: 10.1109/PIC.2010.5687974
Abstract: Reliability is a critical factor in the operation of an automatic container terminal. RTK GPS technology can be adopted for container and equipment positioning, structure-deformation detection, travel control, and safety surveying to enhance reliability. Steps that can be taken to back up GPS operation are also discussed, including shelter compensation, standby communication, a standby reference station, and receiver calibration.