Collective Network of Binary Classifier Framework for Polarimetric SAR Image Classification: An Evolutionary Approach
Pub Date: 2012-08-01 | Epub Date: 2012-03-22 | DOI: 10.1109/TSMCB.2012.2187891 | Pages: 1169-1186
Serkan Kiranyaz, Turker Ince, Stefan Uhlmann, Moncef Gabbouj
Terrain classification over polarimetric synthetic aperture radar (SAR) images has been an active research field in which many features and classifiers have been proposed to date. However, several key questions remain unanswered: 1) how to select features that achieve the highest discrimination over certain classes; 2) how to combine them in the most effective way; 3) which distance metric to apply; 4) how to find the optimal classifier configuration for the classification problem at hand; 5) how to scale/adapt the classifier when a large number of classes/features are present; and, finally, 6) how to train the classifier efficiently to maximize the classification accuracy. In this paper, we propose a collective network of (evolutionary) binary classifiers (CNBC) framework to address all of these problems and to achieve high classification performance. The CNBC framework adopts a "divide and conquer" approach, allocating a network of binary classifiers (NBC) to discriminate each class and performing an evolutionary search to find the optimal binary classifier (BC) in each NBC. In such an (incremental) evolution session, the CNBC body can further adapt itself dynamically to each new incoming class/feature set without a full-scale retraining or reconfiguration. Both visual and numerical performance evaluations of the proposed framework over two benchmark SAR images demonstrate its superiority, with a significant performance gap over several major classifiers in this field.
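The abstract exposes only the one-NBC-per-class decomposition and the evolutionary search, so the following is a minimal sketch of that structure under stated assumptions: a toy evolutionary search over per-class feature masks feeding one-vs-all binary classifiers. The logistic-regression base learner, the mutation scheme, and all names are stand-ins, not the authors' CNBC design.

```python
# Sketch: one-vs-all decomposition with a toy evolutionary search for a
# per-class feature mask. NOT the authors' CNBC; it only illustrates the
# "one classifier network per class, evolved independently" idea.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def evolve_mask(X, y_bin, generations=20, pop=12):
    """Evolve a binary feature mask maximizing binary CV accuracy."""
    d = X.shape[1]
    population = rng.integers(0, 2, size=(pop, d)).astype(bool)
    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = LogisticRegression(max_iter=200)
        return cross_val_score(clf, X[:, mask], y_bin, cv=3).mean()
    for _ in range(generations):
        scores = np.array([fitness(m) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]   # truncation selection
        children = parents.copy()
        children ^= rng.random(children.shape) < 0.1           # bit-flip mutation
        population = np.vstack([parents, children])
    scores = np.array([fitness(m) for m in population])
    return population[scores.argmax()]

def fit_cnbc_like(X, y):
    """One evolved binary classifier per class."""
    models = {}
    for c in np.unique(y):
        y_bin = (y == c).astype(int)
        mask = evolve_mask(X, y_bin)
        clf = LogisticRegression(max_iter=200).fit(X[:, mask], y_bin)
        models[c] = (mask, clf)
    return models

def predict(models, X):
    """Prediction = the class whose binary classifier is most confident."""
    classes = sorted(models)
    conf = np.column_stack([models[c][1].predict_proba(X[:, models[c][0]])[:, 1]
                            for c in classes])
    return np.array(classes)[conf.argmax(axis=1)]

# Toy usage: three Gaussian classes in 12 dimensions.
X = np.vstack([rng.normal(m, 1.0, (60, 12)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 60)
models = fit_cnbc_like(X, y)
print("training accuracy (toy):", (predict(models, X) == y).mean())
```

Adding a class in this decomposition only requires evolving and training one new binary classifier, which is the incremental property the abstract emphasizes.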
{"title":"Collective Network of Binary Classifier Framework for Polarimetric SAR Image Classification: An Evolutionary Approach.","authors":"Serkan Kiranyaz, Turker Ince, Stefan Uhlmann, Moncef Gabbouj","doi":"10.1109/TSMCB.2012.2187891","DOIUrl":"https://doi.org/10.1109/TSMCB.2012.2187891","url":null,"abstract":"<p><p>Terrain classification over polarimetric synthetic aperture radar (SAR) images has been an active research field where several features and classifiers have been proposed up to date. However, some key questions, e.g., 1) how to select certain features so as to achieve highest discrimination over certain classes?, 2) how to combine them in the most effective way?, 3) which distance metric to apply?, 4) how to find the optimal classifier configuration for the classification problem in hand?, 5) how to scale/adapt the classifier if large number of classes/features are present?, and finally, 6) how to train the classifier efficiently to maximize the classification accuracy?, still remain unanswered. In this paper, we propose a collective network of (evolutionary) binary classifier (CNBC) framework to address all these problems and to achieve high classification performance. The CNBC framework adapts a \"Divide and Conquer\" type approach by allocating several NBCs to discriminate each class and performs evolutionary search to find the optimal BC in each NBC. In such an (incremental) evolution session, the CNBC body can further dynamically adapt itself with each new incoming class/feature set without a full-scale retraining or reconfiguration. Both visual and numerical performance evaluations of the proposed framework over two benchmark SAR images demonstrate its superiority and a significant performance gap against several major classifiers in this field. </p>","PeriodicalId":55006,"journal":{"name":"IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics","volume":" ","pages":"1169-86"},"PeriodicalIF":0.0,"publicationDate":"2012-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TSMCB.2012.2187891","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"30557111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Acceleration-Level Cyclic-Motion Generation of Constrained Redundant Robots Tracking Different Paths
Pub Date: 2012-08-01 | Epub Date: 2012-04-03 | DOI: 10.1109/TSMCB.2012.2189003 | Pages: 1257-1269
Zhijun Zhang, Yunong Zhang
In this paper, a cyclic-motion generation (CMG) scheme at the acceleration level is proposed to remedy the joint-angle drift phenomenon of redundant robot manipulators that are controlled at the joint-acceleration or torque level. To achieve this, a cyclic-motion criterion at the joint-acceleration level is exploited. This criterion, together with the joint-angle, joint-velocity, and joint-acceleration limits, is incorporated into the scheme formulation. In addition, Zhang's neural-dynamic method is employed to explain and analyze the effectiveness of the proposed criterion. The scheme is then reformulated as a quadratic program, which is solved by a primal-dual neural network. Simulations of four path-tracking tasks verify the effectiveness and accuracy of the proposed acceleration-level CMG scheme, and comparisons with a velocity-level scheme demonstrate that the former is safer and more widely applicable. An experiment on a physical robot system further verifies the physical realizability of the proposed scheme.
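As a rough schematic of the kind of program the abstract describes (not the paper's exact formulation), an acceleration-level redundancy-resolution QP typically takes the following form. The cost vector c, the weight symbols, and the bound conversion are illustrative assumptions; the cyclic-motion criterion drives the joints back toward their initial angles to suppress drift.

```latex
% Schematic acceleration-level CMG quadratic program (generic symbols).
\begin{aligned}
\min_{\ddot{\theta}}\quad & \tfrac{1}{2}\,\ddot{\theta}^{T}\ddot{\theta} + c^{T}\ddot{\theta},
  \qquad c = \lambda\,(\theta - \theta(0)) + \mu\,\dot{\theta}
  && \text{(cyclic-motion / drift-suppressing criterion)}\\
\text{s.t.}\quad & J(\theta)\,\ddot{\theta} = \ddot{r} - \dot{J}(\theta)\,\dot{\theta}
  && \text{(end-effector path-tracking constraint)}\\
& \ddot{\theta}^{-} \le \ddot{\theta} \le \ddot{\theta}^{+}
  && \text{(joint-angle/velocity/acceleration limits mapped to bounds)}
\end{aligned}
```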
{"title":"Acceleration-Level Cyclic-Motion Generation of Constrained Redundant Robots Tracking Different Paths.","authors":"Zhijun Zhang, Yunong Zhang","doi":"10.1109/TSMCB.2012.2189003","DOIUrl":"https://doi.org/10.1109/TSMCB.2012.2189003","url":null,"abstract":"<p><p>In this paper, a cyclic-motion generation (CMG) scheme at the acceleration level is proposed to remedy the joint-angle drift phenomenon of redundant robot manipulators which are controlled at the joint-acceleration level or torque level. To achieve this, a cyclic-motion criterion at the joint-acceleration level is exploited. This criterion, together with the joint-angle limits, joint-velocity limits, and joint-acceleration limits, is considered into the scheme formulation. In addition, the neural-dynamic method of Zhang is employed to explain and analyze the effectiveness of the proposed criterion. Then, the scheme is reformulated as a quadratic program, which is solved by a primal-dual neural network. Furthermore, four tracking path simulations verify the effectiveness and accuracy of the proposed acceleration-level CMG scheme. Moreover, the comparisons between the proposed acceleration-level CMG scheme and the velocity-level scheme demonstrate that the former is safer and more applicable. The experiment on a physical robot system further verifies the physical realizability of the proposed acceleration-level CMG scheme. </p>","PeriodicalId":55006,"journal":{"name":"IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics","volume":" ","pages":"1257-69"},"PeriodicalIF":0.0,"publicationDate":"2012-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TSMCB.2012.2189003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"30557113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Biased Discriminant Analysis Using Composite Vectors for Eye Detection
Pub Date: 2012-08-01 | Epub Date: 2012-03-06 | DOI: 10.1109/TSMCB.2012.2186798 | Pages: 1095-1106
Chunghoon Kim, Sang-Il Choi, M Turk, Chong-Ho Choi
We propose a new biased discriminant analysis (BDA) using composite vectors for eye detection. A composite vector consists of several pixels inside a window on an image. The covariance of composite vectors is obtained from their inner product and can be considered a generalization of the covariance of pixels. The proposed composite BDA (C-BDA) method is a BDA using the covariance of composite vectors. We construct a hybrid cascade detector for eye detection, using Haar-like features in the earlier stages and composite features obtained from C-BDA in the later stages. The proposed detector runs in real time; its execution time is 5.5 ms on a typical PC. Experimental results on the CMU PIE database and our own real-world data set show that the proposed detector is robust to several kinds of variation, such as facial pose, illumination, eyeglasses, and partial occlusion. Overall, the detection rate per pair of eyes is 98.0% for the 3604 face images of the CMU PIE database and 95.1% for the 2331 face images of the real-world data set. In particular, the detector achieves a 99.7% detection rate for the 2120 CMU PIE images without glasses. Face recognition performance is also investigated using the eye coordinates produced by the proposed detector. The recognition results on the real-world data set show that the proposed detector performs similarly to manually located eye coordinates, indicating that its accuracy is comparable with that of the ground-truth data.
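For readers unfamiliar with BDA, the sketch below illustrates the two ingredients named in the abstract: building composite vectors from image windows and solving a biased, positive-class-centered generalized eigenproblem. The window size, regularization, and toy data are illustrative assumptions, not the paper's settings.

```python
# Sketch of biased discriminant analysis (BDA) on composite vectors: composite
# vectors are sliding-window stacks of pixels, and BDA finds directions that
# scatter negatives away from the positive-class mean.
import numpy as np
from scipy.linalg import eigh

def composite_vectors(img, win=3):
    """Stack each win x win window of a 2-D image into one composite vector."""
    h, w = img.shape
    return np.array([img[i:i + win, j:j + win].ravel()
                     for i in range(h - win + 1)
                     for j in range(w - win + 1)])

def bda(X_pos, X_neg, n_components=2, reg=1e-3):
    """Directions maximizing negative scatter around the positive mean."""
    mu_pos = X_pos.mean(axis=0)
    Sp = (X_pos - mu_pos).T @ (X_pos - mu_pos)   # positives around their own mean
    Sn = (X_neg - mu_pos).T @ (X_neg - mu_pos)   # negatives around the POSITIVE mean
    Sp += reg * np.eye(Sp.shape[0])              # regularize for invertibility
    evals, evecs = eigh(Sn, Sp)                  # generalized eigenproblem Sn v = lambda Sp v
    return evecs[:, ::-1][:, :n_components]      # top eigenvectors

# Toy usage: composite vectors from two random "images" standing in for
# eye / non-eye patches (dimension 9 = flattened 3x3 window).
rng = np.random.default_rng(1)
X_pos = composite_vectors(rng.normal(0.0, 1.0, (12, 12)))
X_neg = composite_vectors(rng.normal(0.5, 2.0, (16, 16)))
W = bda(X_pos, X_neg)
projected = X_neg @ W   # C-BDA-like features for a later classifier stage
```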
{"title":"A New Biased Discriminant Analysis Using Composite Vectors for Eye Detection.","authors":"Chunghoon Kim, Sang-Il Choi, M Turk, Chong-Ho Choi","doi":"10.1109/TSMCB.2012.2186798","DOIUrl":"https://doi.org/10.1109/TSMCB.2012.2186798","url":null,"abstract":"<p><p>We propose a new biased discriminant analysis (BDA) using composite vectors for eye detection. A composite vector consists of several pixels inside a window on an image. The covariance of composite vectors is obtained from their inner product and can be considered as a generalization of the covariance of pixels. The proposed composite BDA (C-BDA) method is a BDA using the covariance of composite vectors. We construct a hybrid cascade detector for eye detection, using Haar-like features in the earlier stages and composite features obtained from C-BDA in the later stages. The proposed detector runs in real time; its execution time is 5.5 ms on a typical PC. The experimental results for the CMU PIE database and our own real-world data set show that the proposed detector provides robust performance to several kinds of variations such as facial pose, illumination, eyeglasses, and partial occlusion. On the whole, the detection rate per pair of eyes is 98.0% for the 3604 face images of the CMU PIE database and 95.1% for the 2331 face images of the real-world data set. In particular, it provides a 99.7% detection rate for the 2120 CMU PIE images without glasses. Face recognition performance is also investigated using the eye coordinates from the proposed detector. The recognition results for the real-world data set show that the proposed detector gives similar performance to the method using manually located eye coordinates, showing that the accuracy of the proposed eye detector is comparable with that of the ground-truth data. </p>","PeriodicalId":55006,"journal":{"name":"IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics","volume":" ","pages":"1095-106"},"PeriodicalIF":0.0,"publicationDate":"2012-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TSMCB.2012.2186798","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40156309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H∞ State Estimation for Discrete-Time Chaotic Systems Based on a Unified Model
Pub Date: 2012-08-01 | Epub Date: 2012-02-29 | DOI: 10.1109/TSMCB.2012.2185842 | Pages: 1053-1063
Meiqin Liu, Senlin Zhang, Zhen Fan, Meikang Qiu
This paper is concerned with the problem of state estimation for a class of discrete-time chaotic systems with or without time delays. A unified model consisting of a linear dynamic system and a bounded static nonlinear operator is employed to describe these systems, such as chaotic neural networks, Chua's circuits, and the Hénon map. Based on an H∞ performance analysis of this unified model using the linear-matrix-inequality approach, H∞ state estimators are designed for this model with sensors to guarantee the asymptotic stability of the estimation-error dynamics and to reduce the influence of noise on the estimation error. The parameters of these filters are obtained by solving an eigenvalue problem. As most discrete-time chaotic systems with or without time delays can be described by this unified model, H∞ state estimators for these systems can be designed in a unified way. Three numerical examples illustrate the effectiveness of the proposed estimator design schemes.
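Schematically, the unified model and the H∞ objective described here can be written as follows. The matrix names and the sector condition on the nonlinearity are generic choices for illustration, not necessarily the paper's exact notation, and the delay-free case is shown.

```latex
% Schematic "unified model" (linear dynamics + bounded static nonlinearity)
% and the H-infinity estimation objective (generic symbols, no delays shown).
\begin{aligned}
& x(k+1) = A\,x(k) + B\,\phi\big(\xi(k)\big) + D\,w(k), \qquad \xi(k) = C\,x(k),\\
& \phi \ \text{bounded and static, e.g., sector-bounded:}\quad
  \big(\phi_i(s) - u_i s\big)\big(\phi_i(s) - v_i s\big) \le 0,\\
& \text{estimator goals:}\quad
  \lim_{k\to\infty} e(k) = 0 \ \text{when } w \equiv 0, \qquad
  \textstyle\sum_{k}\|e(k)\|^{2} < \gamma^{2} \sum_{k}\|w(k)\|^{2}.
\end{aligned}
```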
{"title":"H∞ State Estimation for Discrete-Time Chaotic Systems Based on a Unified Model.","authors":"Meiqin Liu, Senlin Zhang, Zhen Fan, Meikang Qiu","doi":"10.1109/TSMCB.2012.2185842","DOIUrl":"https://doi.org/10.1109/TSMCB.2012.2185842","url":null,"abstract":"<p><p>This paper is concerned with the problem of state estimation for a class of discrete-time chaotic systems with or without time delays. A unified model consisting of a linear dynamic system and a bounded static nonlinear operator is employed to describe these systems, such as chaotic neural networks, Chua's circuits, Hénon map, etc. Based on the H∞ performance analysis of this unified model using the linear matrix inequality approach, H∞ state estimator are designed for this model with sensors to guarantee the asymptotic stability of the estimation error dynamic systems and to reduce the influence of noise on the estimation error. The parameters of these filters are obtained by solving the eigenvalue problem. As most discrete-time chaotic systems with or without time delays can be described with this unified model, H∞ state estimator design for these systems can be done in a unified way. Three numerical examples are exploited to illustrate the effectiveness of the proposed estimator design schemes. </p>","PeriodicalId":55006,"journal":{"name":"IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics","volume":" ","pages":"1053-63"},"PeriodicalIF":0.0,"publicationDate":"2012-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TSMCB.2012.2185842","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"30505060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognizing Emotions From an Ensemble of Features
Pub Date: 2012-08-01 | Epub Date: 2012-05-03 | DOI: 10.1109/TSMCB.2012.2194701 | Pages: 1017-1026
U Tariq, Kai-Hsiang Lin, Zhen Li, Xi Zhou, Zhaowen Wang, Vuong Le, T S Huang, Xutao Lv, T X Han
This paper details the authors' efforts to push the baseline of emotion recognition performance on the Geneva Multimodal Emotion Portrayals (GEMEP) Facial Expression Recognition and Analysis database. Both subject-dependent and subject-independent emotion recognition scenarios are addressed. The approach involves face detection, followed by key-point identification, feature generation, and, finally, classification. An ensemble of features consisting of hierarchical Gaussianization, the scale-invariant feature transform, and some coarse motion features is used. In the classification stage, we use support vector machines. The classification task is divided into person-specific and person-independent emotion recognition using face recognition with either manual labels or automatic algorithms. With manual identification of subjects, we achieve classification rates of 100% for person-specific recognition, 66% for person-independent recognition, and 80% overall.
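The classification stage reduces to concatenating the feature blocks and training an SVM. The sketch below shows only that concatenate-then-classify structure; random projections stand in for the hierarchical-Gaussianization, SIFT, and motion features, and all dimensions are invented for the toy.

```python
# Sketch of the "ensemble of features + SVM" stage with placeholder features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_faces = 120
hg_features   = rng.normal(size=(n_faces, 64))   # placeholder: hierarchical Gaussianization
sift_features = rng.normal(size=(n_faces, 128))  # placeholder: SIFT descriptors
motion_feats  = rng.normal(size=(n_faces, 16))   # placeholder: coarse motion features
labels = rng.integers(0, 5, size=n_faces)        # toy emotion labels, 5 classes

X = np.hstack([hg_features, sift_features, motion_feats])  # the "ensemble of features"
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)
print("train accuracy (toy):", clf.score(X, labels))
```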
{"title":"Recognizing Emotions From an Ensemble of Features.","authors":"U Tariq, Kai-Hsiang Lin, Zhen Li, Xi Zhou, Zhaowen Wang, Vuong Le, T S Huang, Xutao Lv, T X Han","doi":"10.1109/TSMCB.2012.2194701","DOIUrl":"https://doi.org/10.1109/TSMCB.2012.2194701","url":null,"abstract":"<p><p>This paper details the authors' efforts to push the baseline of emotion recognition performance on the Geneva Multimodal Emotion Portrayals (GEMEP) Facial Expression Recognition and Analysis database. Both subject-dependent and subject-independent emotion recognition scenarios are addressed in this paper. The approach toward solving this problem involves face detection, followed by key-point identification, then feature generation, and then, finally, classification. An ensemble of features consisting of hierarchical Gaussianization, scale-invariant feature transform, and some coarse motion features have been used. In the classification stage, we used support vector machines. The classification task has been divided into person-specific and person-independent emotion recognitions using face recognition with either manual labels or automatic algorithms. We achieve 100% performance for the person-specific one, 66% performance for the person-independent one, and 80% performance for overall results, in terms of classification rate, for emotion recognition with manual identification of subjects. </p>","PeriodicalId":55006,"journal":{"name":"IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics","volume":" ","pages":"1017-26"},"PeriodicalIF":0.0,"publicationDate":"2012-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TSMCB.2012.2194701","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"30608808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Accelerated-Limit-Crossing-Based Multilevel Algorithm for the p-Median Problem
Pub Date: 2012-08-01 | Epub Date: 2012-03-08 | DOI: 10.1109/TSMCB.2012.2188100 | Pages: 1187-1202
Zhilei Ren, He Jiang, Jifeng Xuan, Zhongxuan Luo
In this paper, we investigate how to design an efficient heuristic algorithm guided by the backbone and the fat, in the context of the p-median problem. Given a problem instance, the backbone variables are defined as the variables shared by all optimal solutions, and the fat variables are defined as the variables absent from every optimal solution. Identifying the backbone (fat) variables is essential for heuristic algorithms that exploit such structures. Since the existing exact identification method, limit crossing (LC), is time consuming and sensitive to the upper bounds, it is hard to incorporate LC into heuristic algorithm design. In this paper, we develop the accelerated-LC (ALC)-based multilevel algorithm (ALCMA). In contrast to LC, which repeatedly runs the time-consuming Lagrangian relaxation (LR) procedure, ALC performs LR only once, after which every backbone (fat) variable can be determined in O(1) time. Meanwhile, the upper-bound sensitivity is eliminated by a dynamic pseudo-upper-bound mechanism. By combining ALC with the pseudo upper bound, ALCMA can efficiently find high-quality solutions within a series of reduced search spaces. Extensive empirical results demonstrate that ALCMA outperforms existing heuristic algorithms in terms of average solution quality.
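The backbone/fat definitions are easy to make concrete on a toy instance by brute force, which is exactly the enumeration that ALC is designed to avoid at realistic scale. The following sketch enumerates all optimal solutions of a small random p-median instance and intersects them; instance sizes and names are invented for illustration.

```python
# Sketch: backbone (in EVERY optimal solution) and fat (in NO optimal solution)
# variables of a toy p-median instance, by brute-force enumeration.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n, p = 8, 3                                   # 8 candidate facilities, choose 3 medians
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

def cost(medians):
    """p-median objective: every point is served by its nearest open median."""
    return dist[:, list(medians)].min(axis=1).sum()

all_solutions = list(itertools.combinations(range(n), p))
costs = np.array([cost(s) for s in all_solutions])
optima = [set(all_solutions[i])
          for i in np.flatnonzero(np.isclose(costs, costs.min()))]

backbone = set.intersection(*optima)          # shared by all optimal solutions
fat = set(range(n)) - set.union(*optima)      # absent from every optimal solution
print("optimal cost:", costs.min())
print("backbone medians:", backbone, "| fat medians:", fat)
```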
{"title":"An Accelerated-Limit-Crossing-Based Multilevel Algorithm for the p-Median Problem.","authors":"Zhilei Ren, He Jiang, Jifeng Xuan, Zhongxuan Luo","doi":"10.1109/TSMCB.2012.2188100","DOIUrl":"https://doi.org/10.1109/TSMCB.2012.2188100","url":null,"abstract":"<p><p>In this paper, we investigate how to design an efficient heuristic algorithm under the guideline of the backbone and the fat, in the context of the p-median problem. Given a problem instance, the backbone variables are defined as the variables shared by all optimal solutions, and the fat variables are defined as the variables that are absent from every optimal solution. Identification of the backbone (fat) variables is essential for the heuristic algorithms exploiting such structures. Since the existing exact identification method, i.e., limit crossing (LC), is time consuming and sensitive to the upper bounds, it is hard to incorporate LC into heuristic algorithm design. In this paper, we develop the accelerated-LC (ALC)-based multilevel algorithm (ALCMA). In contrast to LC which repeatedly runs the time-consuming Lagrangian relaxation (LR) procedure, ALC is introduced in ALCMA such that LR is performed only once, and every backbone (fat) variable can be determined in O(1) time. Meanwhile, the upper bound sensitivity is eliminated by a dynamic pseudo upper bound mechanism. By combining ALC with the pseudo upper bound, ALCMA can efficiently find high-quality solutions within a series of reduced search spaces. Extensive empirical results demonstrate that ALCMA outperforms existing heuristic algorithms in terms of the average solution quality. </p>","PeriodicalId":55006,"journal":{"name":"IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics","volume":" ","pages":"1187-202"},"PeriodicalIF":0.0,"publicationDate":"2012-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TSMCB.2012.2188100","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40159265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stochastic subset selection for learning with kernel machines
Pub Date: 2012-06-01 | Epub Date: 2011-10-27 | DOI: 10.1109/TSMCB.2011.2171680 | Pages: 616-626
Jason Rhinelander, Xiaoping P Liu
Kernel machines have gained much popularity in machine learning applications. Support vector machines (SVMs) are a subset of kernel machines that generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem whose computational effort scales superlinearly with the number of training samples; it is therefore typically used for offline batch processing of data. Kernel machines operate by retaining a subset of the observed data during training; the data vectors contained within this subset are referred to as support vectors (SVs). This paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm uses a stochastic indexing technique to select a subset of SVs when computing the kernel expansion. The work is novel because it separates the selection of kernel basis functions from the training algorithm used, so the subset selection algorithm can be combined with any online training technique. Computational efficiency is important for online kernel machines because of the real-time requirements of online environments; our algorithm scales linearly with the number of training samples and is compatible with current training techniques. In our experiments on both simulated and real-world data sets, it outperforms standard techniques in computational efficiency and provides increased recognition accuracy.
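A minimal sketch of the central idea follows, assuming a kernel-perceptron-style online trainer as a stand-in (the paper's point is precisely that the trainer is interchangeable): predictions evaluate the kernel expansion over a randomly indexed subset of the retained SVs, so each prediction costs O(subset size) rather than O(all SVs).

```python
# Sketch: online kernel learner whose kernel expansion is evaluated over a
# stochastically indexed subset of the retained support vectors.
import numpy as np

rng = np.random.default_rng(4)

def rbf(x, Y, gamma=1.0):
    """RBF kernel between one point x and a matrix of points Y."""
    return np.exp(-gamma * np.sum((Y - x) ** 2, axis=1))

def online_kernel_learner(stream, dim, subset_size=20):
    SV = np.empty((0, dim))                     # retained support vectors
    alpha = np.empty(0)                         # their expansion weights
    for x, y in stream:                         # labels y in {-1, +1}
        if len(SV):
            k = min(subset_size, len(SV))
            idx = rng.choice(len(SV), size=k, replace=False)  # stochastic indexing
            f = alpha[idx] @ rbf(x, SV[idx])    # expansion over the sampled subset only
        else:
            f = 0.0
        if y * f <= 0:                          # perceptron-style update on mistakes
            SV = np.vstack([SV, x])
            alpha = np.append(alpha, y)
    return SV, alpha

# Toy usage: two Gaussian blobs streamed one sample at a time.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(+1, 1, (200, 2))])
y = np.array([-1] * 200 + [+1] * 200)
order = rng.permutation(len(y))
SV, alpha = online_kernel_learner(zip(X[order], y[order]), dim=2)
print("support vectors retained:", len(SV))
```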
{"title":"Stochastic subset selection for learning with kernel machines.","authors":"Jason Rhinelander, Xiaoping P Liu","doi":"10.1109/TSMCB.2011.2171680","DOIUrl":"https://doi.org/10.1109/TSMCB.2011.2171680","url":null,"abstract":"<p><p>Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.</p>","PeriodicalId":55006,"journal":{"name":"IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics","volume":" ","pages":"616-26"},"PeriodicalIF":0.0,"publicationDate":"2012-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TSMCB.2011.2171680","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40125484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A self-learning particle swarm optimizer for global optimization problems
Pub Date: 2012-06-01 | Epub Date: 2011-11-04 | DOI: 10.1109/TSMCB.2011.2171946 | Pages: 627-646
Changhe Li, Shengxiang Yang, Trung Thanh Nguyen
Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, meaning that all particles in a swarm follow the same strategy. This monotonic learning pattern can leave a particle without the flexibility to cope with different complex situations. This paper presents a novel algorithm, the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies for coping with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO performs better than several other peer algorithms.
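The sketch below shows the individual-level mechanism with four simplified stand-in strategies and a naive success-based probability update. It is not SLPSO itself; the strategy formulas and learning rate are invented, and only the "each particle keeps its own strategy probabilities, adapted by success" idea is being illustrated.

```python
# Sketch: PSO with per-particle adaptive selection among four stand-in strategies.
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):
    return float(np.sum(x ** 2))                # toy objective to minimize

def slpso_like(f, dim=10, n=20, iters=300):
    X = rng.uniform(-5, 5, (n, dim))
    V = np.zeros((n, dim))
    P, pf = X.copy(), np.array([f(x) for x in X])   # personal bests
    probs = np.full((n, 4), 0.25)                   # per-particle strategy probabilities
    for _ in range(iters):
        g = P[pf.argmin()]                          # global best position
        for i in range(n):
            s = rng.choice(4, p=probs[i])
            r = rng.random(dim)
            if s == 0:                              # exploitation: toward global best
                V[i] = 0.7 * V[i] + 1.5 * r * (g - X[i])
            elif s == 1:                            # convergence: toward own best
                V[i] = 0.7 * V[i] + 1.5 * r * (P[i] - X[i])
            elif s == 2:                            # learning from a random peer's best
                V[i] = 0.7 * V[i] + 1.5 * r * (P[rng.integers(n)] - X[i])
            else:                                   # jumping out: random perturbation
                V[i] = 0.7 * V[i] + rng.normal(0, 1, dim)
            X[i] += V[i]
            fx = f(X[i])
            reward = 1.0 if fx < pf[i] else 0.0     # did the chosen strategy improve?
            probs[i, s] += 0.1 * (reward - probs[i, s])
            probs[i] /= probs[i].sum()              # keep a valid distribution
            if fx < pf[i]:
                P[i], pf[i] = X[i].copy(), fx
    return P[pf.argmin()], pf.min()

best_x, best_val = slpso_like(sphere)
print("best objective value found (toy):", best_val)
```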
{"title":"A self-learning particle swarm optimizer for global optimization problems.","authors":"Changhe Li, Shengxiang Yang, Trung Thanh Nguyen","doi":"10.1109/TSMCB.2011.2171946","DOIUrl":"https://doi.org/10.1109/TSMCB.2011.2171946","url":null,"abstract":"<p><p>Particle swarm optimization (PSO) has been shown as an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may cause the lack of intelligence for a particular particle, which makes it unable to deal with different complex situations. This paper presents a novel algorithm, called self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which can enable a particle to choose the optimal strategy according to its own local fitness landscape. The experimental study on a set of 45 test functions and two real-world problems show that SLPSO has a superior performance in comparison with several other peer algorithms.</p>","PeriodicalId":55006,"journal":{"name":"IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics","volume":" ","pages":"627-46"},"PeriodicalIF":0.0,"publicationDate":"2012-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TSMCB.2011.2171946","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40139577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An optimization of allocation of information granularity in the interpretation of data structures: toward granular fuzzy clustering
Pub Date: 2012-06-01 | Epub Date: 2011-11-03 | DOI: 10.1109/TSMCB.2011.2170067 | Pages: 582-590
Witold Pedrycz, Andrzej Bargiela
Clustering forms one of the most visible conceptual and algorithmic frameworks for developing information granules. Regardless of the algorithm being used, the representation of information granules (clusters) is predominantly numeric, coming in the form of prototypes, partition matrices, dendrograms, etc. In this paper, we consider a concept of granular prototypes that generalizes the numeric representation of clusters and, in this way, helps capture more detail about the data structure. By invoking the granulation-degranulation scheme, we design granular prototypes that reflect the structure of the data to a greater extent than their numeric counterparts. The design is formulated as an optimization problem guided by a coverage criterion: we maximize the number of data points whose granular realization includes the original data. The granularity of the prototypes themselves is treated as an important design asset; hence, its allocation to the individual prototypes is optimized so that the coverage criterion is maximized. In this regard, several schemes of optimal allocation of information granularity are investigated, in which interval-valued prototypes are formed around the already produced numeric representatives. Experimental studies are provided in which the design of granular prototypes of interval format is discussed and characterized.
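A minimal sketch of the coverage-driven allocation follows, assuming k-means numeric prototypes and a crude random search over the allocation of a fixed granularity budget (the paper investigates more principled allocation schemes): interval half-widths are distributed across prototypes so that as many points as possible fall inside some prototype's box.

```python
# Sketch: allocate a granularity budget of interval half-widths across
# numeric prototypes to maximize data coverage.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(c, 0.4, (100, 2)) for c in ((0, 0), (3, 0), (0, 3))])
protos = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X).cluster_centers_

def coverage(eps):
    """Fraction of points inside at least one prototype's box of half-widths eps."""
    inside = np.any(np.all(np.abs(X[:, None, :] - protos[None]) <= eps[None, :, None],
                           axis=2), axis=1)
    return inside.mean()

budget = 1.5                                   # total granularity to distribute
best_eps, best_cov = None, -1.0
for _ in range(2000):                          # random search over allocations
    eps = budget * rng.dirichlet(np.ones(3))   # nonnegative split of the budget
    c = coverage(eps)
    if c > best_cov:
        best_eps, best_cov = eps, c

print("interval half-widths per prototype:", best_eps, "coverage:", best_cov)
```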
{"title":"An optimization of allocation of information granularity in the interpretation of data structures: toward granular fuzzy clustering.","authors":"Witold Pedrycz, Andrzej Bargiela","doi":"10.1109/TSMCB.2011.2170067","DOIUrl":"https://doi.org/10.1109/TSMCB.2011.2170067","url":null,"abstract":"<p><p>Clustering forms one of the most visible conceptual and algorithmic framework of developing information granules. In spite of the algorithm being used, the representation of information granules-clusters is predominantly numeric (coming in the form of prototypes, partition matrices, dendrograms, etc.). In this paper, we consider a concept of granular prototypes that generalizes the numeric representation of the clusters and, in this way, helps capture more details about the data structure. By invoking the granulation-degranulation scheme, we design granular prototypes being reflective of the structure of data to a higher extent than the representation that is provided by their numeric counterparts (prototypes). The design is formulated as an optimization problem, which is guided by the coverage criterion, meaning that we maximize the number of data for which their granular realization includes the original data. The granularity of the prototypes themselves is treated as an important design asset; hence, its allocation to the individual prototypes is optimized so that the coverage criterion becomes maximized. With this regard, several schemes of optimal allocation of information granularity are investigated, where interval-valued prototypes are formed around the already produced numeric representatives. Experimental studies are provided in which the design of granular prototypes of interval format is discussed and characterized.</p>","PeriodicalId":55006,"journal":{"name":"IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics","volume":" ","pages":"582-90"},"PeriodicalIF":0.0,"publicationDate":"2012-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TSMCB.2011.2170067","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40139083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Symbolic dynamic filtering and language measure for behavior identification of mobile robots
Pub Date: 2012-06-01 | Epub Date: 2011-11-03 | DOI: 10.1109/TSMCB.2011.2172419 | Pages: 647-659
Goutham Mallapragada, Asok Ray, Xin Jin
This paper presents a procedure for behavior identification of mobile robots, which requires limited or no domain knowledge of the underlying process. While the features of robot behavior are extracted by symbolic dynamic filtering of the observed time series, the behavior patterns are classified based on language measure theory. The behavior identification procedure has been experimentally validated on a networked robotic test bed by comparison with commonly used tools, namely, principal component analysis for feature extraction and Bayesian risk analysis for pattern classification.
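A minimal sketch of the feature-extraction step follows, assuming a uniform partition of the signal range and a stationary-distribution feature; the paper's partitioning and the language-measure classification layer are not reproduced here.

```python
# Sketch of symbolic dynamic filtering (SDF): quantize a time series into
# symbols, estimate the symbol-transition (Markov) matrix, and use its
# stationary state-probability vector as a low-dimensional behavior feature.
import numpy as np

rng = np.random.default_rng(7)

def sdf_feature(x, n_symbols=4):
    edges = np.linspace(x.min(), x.max(), n_symbols + 1)[1:-1]
    s = np.digitize(x, edges)                        # symbol sequence
    T = np.zeros((n_symbols, n_symbols))
    for a, b in zip(s[:-1], s[1:]):                  # count symbol transitions
        T[a, b] += 1
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1) # row-stochastic transition matrix
    evals, evecs = np.linalg.eig(T.T)                # left eigenvector for eigenvalue 1
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
    return np.abs(pi) / np.abs(pi).sum()             # stationary probability vector

# Toy usage: two "behaviors" yield distinguishable state-probability features.
slow = np.sin(0.05 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
fast = np.sin(0.50 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
print("slow behavior feature:", sdf_feature(slow))
print("fast behavior feature:", sdf_feature(fast))
```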
{"title":"Symbolic dynamic filtering and language measure for behavior identification of mobile robots.","authors":"Goutham Mallapragada, Asok Ray, Xin Jin","doi":"10.1109/TSMCB.2011.2172419","DOIUrl":"https://doi.org/10.1109/TSMCB.2011.2172419","url":null,"abstract":"<p><p>This paper presents a procedure for behavior identification of mobile robots, which requires limited or no domain knowledge of the underlying process. While the features of robot behavior are extracted by symbolic dynamic filtering of the observed time series, the behavior patterns are classified based on language measure theory. The behavior identification procedure has been experimentally validated on a networked robotic test bed by comparison with commonly used tools, namely, principal component analysis for feature extraction and Bayesian risk analysis for pattern classification.</p>","PeriodicalId":55006,"journal":{"name":"IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics","volume":" ","pages":"647-59"},"PeriodicalIF":0.0,"publicationDate":"2012-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TSMCB.2011.2172419","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40139579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}