Pub Date: 2012-08-01 | Epub Date: 2012-02-29 | DOI: 10.1109/TSMCB.2012.2186125
R M da Costa, A Gonzaga
The human eye is sensitive to visible light. Increasing illumination on the eye causes the pupil to contract, while decreasing illumination causes it to dilate. Visible light causes specular reflections inside the iris ring. On the other hand, the human retina is less sensitive to near-infrared (NIR) radiation in the wavelength range from 800 nm to 1400 nm, but iris detail can still be imaged under NIR illumination. In order to measure the dynamic movement of the human pupil and iris while keeping light-induced reflexes from affecting the quality of the digitized image, this paper describes a device based on the consensual reflex, the biological phenomenon by which the two pupils contract and dilate synchronously when one eye is illuminated with visible light. In this paper, we propose to capture images of the pupil of one eye under NIR illumination while illuminating the other eye with a visible-light pulse. This new approach extracts iris features called "dynamic features (DFs)." The methodology extracts information about the way the human eye reacts to light and uses that information for biometric recognition. The results demonstrate that these features are discriminative and that, even with a simple Euclidean distance measure, an average recognition accuracy of 99.1% was obtained. The proposed methodology has the potential to be "fraud-proof," because DFs can only be extracted from living irises.
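The recognition step the abstract describes, matching a probe feature vector against enrolled templates by Euclidean distance, can be sketched as a nearest-template lookup (subject names and feature values below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical gallery of dynamic-feature vectors: one enrolled
# template per subject (values invented for the example).
gallery = {
    "subject_a": np.array([0.82, 0.31, 0.55]),
    "subject_b": np.array([0.10, 0.95, 0.40]),
}

def match(probe, gallery):
    """Return the enrolled subject whose template is closest to the
    probe feature vector in Euclidean distance."""
    return min(gallery, key=lambda s: np.linalg.norm(gallery[s] - probe))

print(match(np.array([0.80, 0.30, 0.50]), gallery))  # subject_a
```

In a real system the gallery would hold the paper's dynamic features extracted during enrollment; the decision rule itself stays this simple.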
"Dynamic Features for Iris Recognition," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), pp. 1072-1082.
Pub Date: 2012-08-01 | Epub Date: 2012-05-07 | DOI: 10.1109/TSMCB.2012.2194485
S W Chew, P Lucey, S Lucey, J Saragih, J F Cohn, I Matthews, S Sridharan
For facial expression recognition systems to be applicable in the real world, they need to detect and track a previously unseen person's face and its movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, in which 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment has so far been impossible to achieve reliably in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment followed by a biologically inspired appearance descriptor such as the histogram of oriented gradients or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms, e.g., constrained local models (CLMs), have demonstrated both high reliability and accuracy for unseen subjects. This begs the question: aside from countering illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close-to-perfect alignment is obtained, there is no real benefit in employing these appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors work well precisely by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment, subject-dependent active appearance models versus subject-independent CLMs, on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
"In the Pursuit of Effective Affective Computing: The Relationship Between Features and Registration," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), pp. 1006-1016.
Pub Date: 2012-08-01 | Epub Date: 2012-03-16 | DOI: 10.1109/TSMCB.2012.2187280
Shuo Wang, Xin Yao
Class imbalance problems have drawn growing interest recently because of the classification difficulty caused by imbalanced class distributions. In particular, many ensemble methods have been proposed to deal with such imbalance. However, most efforts so far have focused only on two-class imbalance problems, and unsolved issues remain in the multiclass imbalance problems that arise in real-world applications. This paper studies the challenges posed by multiclass imbalance and investigates the generalization ability of several ensemble solutions, including our recently proposed algorithm AdaBoost.NC, with the aim of handling multiple classes and imbalance effectively and directly. We first study the impact of multiminority and multimajority on the performance of two basic resampling techniques. Both present strong negative effects, with "multimajority" tending to be more harmful to generalization performance. Motivated by these results, we then apply AdaBoost.NC to several real-world multiclass imbalance tasks and compare it to other popular ensemble methods. AdaBoost.NC is shown to be better at recognizing minority-class examples and at balancing performance among classes in terms of G-mean, without using any class decomposition.
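The G-mean used to compare methods here is the geometric mean of per-class recalls, which drops sharply if any single class is recognized poorly. A minimal computation from a confusion matrix (the example matrix is illustrative, not from the paper):

```python
import numpy as np

def g_mean(conf):
    """Geometric mean of per-class recalls from a confusion matrix
    (rows = true class, columns = predicted class)."""
    conf = np.asarray(conf, dtype=float)
    recalls = np.diag(conf) / conf.sum(axis=1)
    return recalls.prod() ** (1.0 / len(recalls))

# Three-class toy example: high accuracy on the majority class cannot
# mask a poorly recognized minority class, since recalls multiply.
conf = [[90, 5, 5],
        [2, 8, 0],
        [1, 0, 9]]
print(round(g_mean(conf), 3))  # recalls 0.9, 0.8, 0.9 -> about 0.865
```

This is why G-mean is a natural yardstick for multiclass imbalance: overall accuracy on this matrix is about 93%, but the G-mean reflects the weaker minority-class recall.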
"Multiclass Imbalance Problems: Analysis and Potential Solutions," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), pp. 1119-1130.
Pub Date: 2012-08-01 | Epub Date: 2012-03-15 | DOI: 10.1109/TSMCB.2012.2188509
Rui Xu, Jie Xu, D C Wunsch
Swarm intelligence has emerged as a worthwhile class of clustering methods due to its convenient implementation, parallel capability, ability to avoid local minima, and other advantages. In such applications, clustering validity indices usually operate as fitness functions to evaluate the quality of the obtained clusters. However, because validity indices are usually data dependent and designed for certain types of data, the choice of index as the fitness function may critically affect cluster quality. Here, we compare the performance of eight well-known and widely used clustering validity indices, namely, the Caliński-Harabasz index, the CS index, the Davies-Bouldin index, the Dunn index and two of its generalized versions, the I index, and the silhouette statistic, on both synthetic and real data sets in the framework of differential-evolution-particle-swarm-optimization (DEPSO)-based clustering. DEPSO is a hybrid evolutionary algorithm combining a stochastic optimization approach (differential evolution) with a swarm intelligence method (particle swarm optimization), which further increases the search capability and achieves higher flexibility in exploring the problem space. According to the experimental results, the silhouette statistic stands out on most of the data sets we examined. We nevertheless suggest that users base their conclusions not on a single index but on the results of several indices, to achieve reliable clustering structures.
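As an illustration of a validity index serving as a fitness function, here is a minimal silhouette computation in plain NumPy (a sketch of the standard definition, not the DEPSO implementation; the data are invented):

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette over all points: s(i) = (b - a) / max(a, b),
    where a is point i's mean distance to its own cluster and b is its
    mean distance to the nearest other cluster."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    scores = []
    for i, c in enumerate(labels):
        same = labels == c
        a = d[i, same].sum() / max(same.sum() - 1, 1)  # exclude self (d=0)
        b = min(d[i, labels == o].mean() for o in set(labels) if o != c)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two well-separated blobs score close to 1; in a DEPSO-style search this
# value would be the fitness of the candidate partition.
X = [[0, 0], [0, 1], [10, 10], [10, 11]]
print(silhouette(X, [0, 0, 1, 1]))
```

A swarm optimizer would evaluate this function on each candidate labeling and move toward partitions with a higher score.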
"A Comparison Study of Validity Indices on Swarm-Intelligence-Based Clustering," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), pp. 1243-1256.
Pub Date: 2012-08-01 | Epub Date: 2012-04-03 | DOI: 10.1109/TSMCB.2012.2188507
S Ulbrich, V R de Angulo, T Asfour, C Torras, R Dillmann
The kinematics of a robot with many degrees of freedom is a very complex function. Learning this function over a large workspace with good precision requires a huge number of training samples, i.e., robot movements. In this paper, we introduce the Kinematic Bézier Map (KB-Map), a parameterizable model that trades the generality of other systems for a structure that readily incorporates some of the geometric constraints of a kinematic function. In this way, the number of training samples required is drastically reduced. Moreover, the simplicity of the model reduces learning to solving a linear least-squares problem. Systematic experiments show the excellent interpolation and extrapolation capabilities of KB-Maps and their relatively low sensitivity to noise.
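The point that a model linear in its parameters reduces learning to one least-squares solve can be illustrated generically (this is not the KB-Map itself; the basis, data, and coefficients below are invented for the example):

```python
import numpy as np

# When a model is linear in its unknown weights w, training is a single
# linear least-squares solve, however nonlinear the fixed basis functions
# are in the input.
rng = np.random.default_rng(0)
q = rng.uniform(-1, 1, size=(50, 1))          # sampled "joint angles"
Phi = np.hstack([np.ones_like(q), q, q**2])   # fixed nonlinear basis
w_true = np.array([0.5, -1.0, 2.0])           # ground-truth coefficients
x = Phi @ w_true                              # noiseless "end-effector" data
w_hat, *_ = np.linalg.lstsq(Phi, x, rcond=None)
print(np.allclose(w_hat, w_true))  # True: exact recovery without noise
```

The KB-Map exploits the same property with a Bézier parameterization, which is why it avoids iterative, local-minimum-prone training.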
"Kinematic Bézier Maps," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), pp. 1215-1230.
Pub Date: 2012-08-01 | Epub Date: 2012-05-07 | DOI: 10.1109/TSMCB.2012.2192269
Songfan Yang, B Bhanu
Existing video-based facial expression recognition techniques analyze geometry-based and appearance-based information in every frame and explore the temporal relations among frames. In contrast, we present a new image-based representation, the emotion avatar image (EAI), and an associated reference image, the avatar reference. This representation copes with out-of-plane head rotation; it is not only robust to outliers but also provides a way to aggregate dynamic information from expressions of various lengths. The approach to facial expression analysis consists of the following steps: 1) face detection; 2) face registration of video frames with the avatar reference to form the EAI representation; 3) computation of features from EAIs using both local binary patterns and local phase quantization; and 4) classification of the features into one of the emotion types using a linear support vector machine classifier. Our system is tested on the Facial Expression Recognition and Analysis Challenge (FERA2011) data, i.e., the Geneva Multimodal Emotion Portrayal (GEMEP-FERA) data set. The experimental results demonstrate that the information captured in an EAI is a very strong cue for emotion inference. Moreover, our method suppresses person-specific information and performs well on unseen data.
"Understanding Discrete Facial Expressions in Video Using an Emotion Avatar Image," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), pp. 980-992.
Pub Date: 2012-08-01 | Epub Date: 2012-03-09 | DOI: 10.1109/TSMCB.2012.2185694
A Vakanski, I Mantegh, A Irish, F Janabi-Sharifi
The main objective of this paper is to develop an efficient method for learning and reproducing complex trajectories for robot programming by demonstration. The demonstrated trajectories are encoded with a hidden Markov model, and a generalized trajectory is generated using the concept of key points. Key points are identified from significant changes in position and velocity in the demonstrated trajectories. The resulting sequences of trajectory key points are temporally aligned using the multidimensional dynamic time warping algorithm, and a generalized trajectory is obtained by smoothing-spline interpolation of the clustered key points. The principal advantage of the proposed approach is its use of the trajectory key points from all demonstrations when generating the generalized trajectory. In addition, the variability of the key-point clusters across the demonstration set is used to assign weighting coefficients, yielding a generalization procedure that accounts for the relevance of reproducing different parts of the trajectories. The approach is verified experimentally on trajectories with two different levels of complexity.
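The temporal alignment step can be sketched with the classic O(mn) dynamic time warping recurrence (a generic DTW over multidimensional points, not the paper's key-point variant; the trajectories are invented):

```python
import numpy as np

def dtw(a, b):
    """Dynamic-time-warping alignment cost between trajectories
    a (m x d) and b (n x d), with Euclidean local distance."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    m, n = len(a), len(b)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible predecessors.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]

# A trajectory aligns at zero cost with a time-stretched copy of itself,
# which is exactly the invariance demonstrations recorded at different
# speeds require.
t = np.linspace(0, 1, 20)[:, None]
path = np.hstack([t, np.sin(3 * t)])
stretched = path[np.repeat(np.arange(20), 2)]  # every sample duplicated
print(dtw(path, stretched))  # 0.0
```

The paper applies this alignment to sequences of key points rather than raw samples, which keeps m and n small.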
"Trajectory Learning for Robot Programming by Demonstration Using Hidden Markov Model and Dynamic Time Warping," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), pp. 1039-1052.
Pub Date: 2012-08-01 | Epub Date: 2012-03-09 | DOI: 10.1109/TSMCB.2012.2186611
S Sarkar, K Mukherjee, A Ray, A Srivastav, T A Wettergren
This paper presents the qualitative nature of communication network operations as an abstraction of typical thermodynamic parameters (e.g., order parameter, temperature, and pressure). Specifically, statistical-mechanics-inspired models of critical phenomena (e.g., phase transitions and size scaling) for heterogeneous packet transmission are developed in terms of multiple intensive parameters, namely, the external packet load on the network and the transmission probabilities of the heterogeneous packet types. Network phase diagrams are constructed from these traffic parameters, and decision and control strategies for heterogeneous packet transmission are formulated. In this context, decision functions and control objectives are derived in closed form, and pertinent results of test and validation on a simulated network system are presented.
"Statistical Mechanics-Inspired Modeling of Heterogeneous Packet Transmission in Communication Networks," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), pp. 1083-1094.
Pub Date: 2012-08-01 | Epub Date: 2012-03-14 | DOI: 10.1109/TSMCB.2012.2187442
Qing Gao, Xiao-Jun Zeng, Gang Feng, Yong Wang, Jianbin Qiu
This paper presents a novel approach to control general nonlinear systems based on Takagi-Sugeno (T-S) fuzzy dynamic models. It is first shown that a general nonlinear system can be approximated by a generalized T-S fuzzy model to any degree of accuracy on any compact set. It is then shown that the stabilization problem of the general nonlinear system can be solved as a robust stabilization problem of the developed T-S fuzzy system with the approximation errors as the uncertainty term. Based on a piecewise quadratic Lyapunov function, the robust semiglobal stabilization and H∞ control of the general nonlinear system are formulated in the form of linear matrix inequalities. Simulation results are provided to illustrate the effectiveness of the proposed approaches.
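The membership-weighted blending at the heart of a T-S fuzzy model can be sketched generically (a scalar toy, not the paper's generalized model or its LMI-based controller; all names and values are illustrative):

```python
import numpy as np

def ts_blend(x, centers, width, A_locals):
    """Evaluate a scalar T-S fuzzy model at state x: the effective
    dynamics are a membership-weighted sum of local linear models,
    x' = (sum_i w_i(x) * A_i) * x, with normalized Gaussian memberships."""
    w = np.exp(-((x - centers) / width) ** 2)   # rule firing strengths
    w = w / w.sum()                             # normalize to sum to 1
    A = sum(wi * Ai for wi, Ai in zip(w, A_locals))
    return A * x

A_locals = [-1.0, -2.0]                 # two stable local linear models
centers = np.array([-1.0, 1.0])         # rule centers in state space
print(ts_blend(0.5, centers, 1.0, A_locals))
```

Because the blended dynamics interpolate the local models, the derivative at x = 0.5 lies between the two local predictions, -1.0 * x and -2.0 * x; the paper's contribution is proving such blends approximate general nonlinear systems arbitrarily well and stabilizing them despite the approximation error.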
"T-S-Fuzzy-Model-Based Approximation and Controller Design for General Nonlinear Systems," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), pp. 1143-1154.
Pub Date: 2012-08-01 | Epub Date: 2012-02-10 | DOI: 10.1109/TSMCB.2012.2185843
K Kiguchi, Y Hayashi
Many kinds of power-assist robots have been developed to assist the self-rehabilitation and/or daily-life motions of physically weak persons, and several control methods have been proposed to operate such robots according to the user's motion intention. In this paper, an electromyogram (EMG)-based impedance control method for an upper-limb power-assist exoskeleton robot is proposed to control the robot in accordance with the user's motion intention. The proposed method is simple, easy to design, humanlike, and adaptable to any user. A neurofuzzy matrix modifier makes the controller adaptable to any user. The method takes into account not only the characteristics of the EMG signals but also those of the human body. The effectiveness of the proposed method was evaluated in experiments.
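An impedance-style assist law of the general kind described can be sketched as a spring-damper rule whose stiffness scales with muscle activity (a hypothetical toy, not the paper's neurofuzzy controller; the gains, signals, and scaling rule are invented for illustration):

```python
def assist_torque(emg_level, q, q_dot, q_des, k0=10.0, b0=1.0):
    """Toy impedance assist law for one joint: a spring toward the
    desired angle q_des plus velocity damping, where the spring
    stiffness grows with the processed EMG amplitude (0..1)."""
    k = k0 * emg_level          # stronger muscle activity -> stiffer assist
    return k * (q_des - q) - b0 * q_dot

# Moderate activation, joint 0.2 rad short of the 0.4 rad target, at rest:
print(assist_torque(0.5, q=0.2, q_dot=0.0, q_des=0.4))  # 1.0 N*m of assist
```

The paper's contribution is replacing the fixed scaling above with a neurofuzzy modifier that adapts the impedance parameters to each user.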
"An EMG-Based Control for an Upper-Limb Power-Assist Exoskeleton Robot," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), pp. 1064-1071.