Pub Date: 2006-10-16 | DOI: 10.1109/ISDA.2006.253744
Qiaoling Wang, Changhong Wang, X. Gao
Title: A Hybrid Optimization Algorithm based on Clonal Selection Principle and Particle Swarm Intelligence
This paper first reviews the background of the clonal selection algorithm and the particle swarm method. The clonal selection algorithm imitates the basic principle of the adaptive immune response to a virus stimulus, while particle swarm optimization is motivated by the social behavior of swarms. Inspired by these two methods, we propose a hybrid optimization algorithm. The steps of the hybrid algorithm are described in detail, and its performance is evaluated on a unidimensional function optimization problem and three multidimensional function optimization problems. The algorithm is also compared with both the clonal selection algorithm and the particle swarm method through numerical simulations.
Pub Date: 2006-10-16 | DOI: 10.1109/ISDA.2006.253871
Dongdong Cao, Ping Guo
Title: Residual Error based Approach to Classification of Multisource Remote Sensing Images
Classification of multisource remote sensing images has been studied for decades, and many methods have been proposed. Most of these studies focus on improving the classifiers in order to obtain higher classification accuracy. However, even for the most promising neural network methods, good performance depends not only on the classifier itself but also on the training patterns (i.e., the features). With this in mind, we propose an approach to feature selection and classification of multisource remote sensing images based on residual error. In particular, a feature-selection scheme is proposed that selects effective subsets of features as classifier inputs by taking into account the residual error associated with each land-cover class. In addition, a classification technique based on the selected features and a feedforward neural network is investigated. The results of experiments carried out on a multisource data set confirm the validity of the proposed approach.
Pub Date: 2006-10-16 | DOI: 10.1109/ISDA.2006.253916
Xiu-ying Wang, Jinfei Sun, Ai-guang Yang, T. Chai
Title: The Design and Development of A Scheduling System for Steelmaking and Continuous Casting based on Component Technology
Software components play an important role in modern software and system development. Based on the concepts and technology of software reuse, we have developed a scheduling system for steelmaking and continuous casting for the Iron and Steel Corporation of China. The reusable components abstracted from this system include the plan-making component, the evaluation component, the dynamic adjusting component, and the Gantt simulation editor component. These components may be reused to construct new scheduling systems for other steel plants. The abstraction strategies are presented in this paper, and formal descriptions of the plan-making component and the dynamic adjusting component are given. Reuse of these components avoids the waste of manpower and resources caused by repeated development within the same industry.
Pub Date: 2006-10-16 | DOI: 10.1109/ISDA.2006.253843
M. Kamran, S. A. Qureshi, Feng Shi, Yizhuo Wang, Yumin Xie
Title: A Novel Image Compression Architecture with proficient Layered scenario
The architecture proposed in this paper efficiently performs all of the operations needed to compress image data. The behavioral architecture is not only verified algorithmically but also gives a clear picture of the factors that hinder efficient data transfer. Moreover, this paper continues earlier work on a concurrent compression pre-coder design, to which DCT coders are appended for the transformation and compression operations. Compression in the coders is handled by an adaptive Huffman coding scheme. The most distinctive characteristic of the proposed design is that the pre-coder can be attached to coders working in either format, i.e., JPEG or MPEG. In either case the results are reliable, and the image quality supports the proposed architecture even in systems with variable bandwidth.
Pub Date: 2006-10-16 | DOI: 10.1109/ISDA.2006.253894
Feng Xue, Zhong Liu, Zhangsong Shi
Title: Unscented Particle Filter for Bearings-only Tracking with Out-of-Sequence Measurements in Sensor Networks
An out-of-sequence measurement (OOSM) processing algorithm for improving passive tracking performance in wireless sensor networks is proposed. First, a decentralized tracking structure is organized through dynamic clustering of sensor nodes, and cluster heads collect measurements from their child nodes to form local estimates. A particle filter scheme is then presented to solve the OOSM problem within this decentralized structure. Because a standard proposal density has limited exploration capability, an unscented particle filter (UPF) is used to incorporate the most recent measurement and to generate the proposal distribution of the particle filter. The detailed implementation steps of OOSM processing based on the UPF (OOSM-UPF) are derived. Finally, the bearings-only tracking state space is modeled with a turn-rate model, and a 3D simulation scenario is constructed to test several filters under OOSMs. Simulation results show that OOSM-UPF performs considerably better than the other schemes.
Pub Date: 2006-10-16 | DOI: 10.1109/ISDA.2006.38
Hongyan Shi, Xiaoming Sun, Changzhi Sun, Dongyang Chen, Yuejun An
Title: Research of the Path Planning Complexity for Autonomous Mobile Robot under Dynamic Environments
Using two methods, Lyapunov exponent estimation and power spectrum analysis, it is verified that chaos exists in the time series of robot-obstacle distance information that an autonomous mobile robot obtains from its sensors. This chaotic behavior explains the complexity of sensor-based path planning for autonomous mobile robots in dynamic environments. The existence of chaos makes path planning for an autonomous mobile robot in dynamic environments NP-hard, and it lays a foundation for explaining why navigation of autonomous mobile robots in dynamic environments is NP-hard.
Pub Date: 2006-10-16 | DOI: 10.1109/ISDA.2006.253832
Wang Yutai, Liu Feng, Li Nianqiang
Title: Study on the Intelligent Laser Instrument of Cement Particle Size Measurement
Sample preparation for particle size measurement is divided into wet and dry dispersion. The wet dispersion method, which uses stirring, ultrasound, and dispersants, is already mature. Dry dispersion can only adjust flow velocity and gas pressure, so its sample preparation is more difficult than for wet dispersion. However, because cement is water-soluble, dry dispersion is the only usable method. The laser diffraction and Mie scattering theory models are discussed in this paper, and the dry dispersion suspension principle and multiple scattering are also described briefly. To meet practical demands, the analysis instrument is equipped with the "particle analysis expert" software system. The measuring range of the dry cement laser particle instrument is 0.1-600 microns, and its measuring time is less than 30 seconds. Experimental results show that the instrument offers high accuracy, good repeatability, and fast testing speed.
Pub Date: 2006-10-16 | DOI: 10.1109/ISDA.2006.253699
Xiuling He, Yang Yang, Zengzhao Chen, Ying Yu, Cailin Dong
Title: Field Extraction Based on Two-level Regulated HMT in Auto Form Processing
Data fields in a form contain valuable information, so field data extraction is important in form processing. Most conventional data extraction methods depend on line frames. However, many types of forms, both table and non-table, lack line frames and are widely used in daily life. A new field locating method using prior knowledge is proposed, based on two-level regulated hit-or-miss transform features of the form background. The feasibility of the method is proved theoretically. Experiments and applications show that it is several dozen times faster than the ordinary regulated hit-or-miss transform, and its accuracy satisfies real applications.
Pub Date: 2006-10-16 | DOI: 10.1109/ISDA.2006.135
Guoxin Wang, Zhen Zhao
Title: Determination of Design Ground Motion For Critical Engineering Structures Based on Probabilistic Seismic Hazard Analysis
This paper proposes a method for determining the design ground motion for critical engineering structures based on probabilistic seismic hazard analysis (PSHA). Effective peak acceleration (EPA) is used as the basic ground motion parameter. Comparing the contribution of the dominant potential seismic source at a long period (T = 1.0 s) and at a short period, we suggest that the dominant potential seismic source be determined according to the dynamic properties (e.g., the expected period) of the engineering structure. We then use the contribution functions of the dominant sources, together with an attenuation law, to determine the design earthquake magnitude and epicentral distance. From these results, design ground motion parameters can be determined reasonably.
Pub Date: 2006-10-16 | DOI: 10.1109/ISDA.2006.253876
Yuehui Sun
Title: Face Detection using DT-CWT on SHPCA Space
A novel face detection algorithm is presented that applies the dual-tree complex wavelet transform (DT-CWT) in a spectral histogram PCA space (SHPCA) together with a support vector machine (SVM). The DT-CWT is a recently studied transform that provides good directional selectivity in six fixed orientations at different scales. It has limited redundancy for images and is much faster to compute than the Gabor transform. Hence, the DT-CWT is a good replacement for the Gabor transform in some image processing applications, especially for face image representation. In the proposed algorithm, images are first convolved with a set of filters, including DT-CWT filters, and then projected into the SHPCA space to obtain frequency-based features. SVM classification is then applied in the SHPCA space to detect whether faces are present in the images. Experimental results show that the DT-CWT performs much better than the Gabor transform in the SHPCA space. Furthermore, in preliminary experiments, an SVM in the SHPCA space was trained on 4000 aligned face images and 6000 non-face images, and a robust classification function for face and non-face patterns was obtained, giving satisfactory performance. Several issues concerning computation time savings and performance improvement are discussed.