Application of Neuro-Fuzzy Systems: Development of a Fuzzy Learning Decision Tree and Application to Tactile Recognition
Hongxing Li, C. L. P. Chen, Han-Pang Huang
Pub Date: 2000-09-21 | DOI: 10.1201/9781420057997.CH17
Fuzzy Assessment Systems of Rehabilitative Process for CVA Patients
Hongxing Li, C. L. P. Chen, Han-Pang Huang
Pub Date: 2000-09-21 | DOI: 10.1201/9781420057997.CH18
Cerebrovascular accidents (CVA) have become a serious disease in our society, so assessing the condition of CVA patients and guiding their rehabilitation is very important. Therapists train CVA patients on the functional activities they need in daily life, and during these rehabilitative therapeutic activities the assessment of each patient's motor control ability is essential. In this chapter, a fuzzy diagnostic system is developed to evaluate the motor control ability of CVA patients. Patients are analyzed according to motor control abilities defined from kinetic signals; the kinetic signals are fed into the proposed fuzzy diagnostic system to assess global control ability, and the result is compared with the FIM (Functional Independence Measure) score, a clinical index used in hospitals to assess the condition of CVA patients. It is shown that the proposed fuzzy diagnostic system can accurately assess the motor control ability of CVA patients.
A DSP-based Neural Controller for a Multi-degree Prosthetic Hand
Hongxing Li, C. L. P. Chen, Han-Pang Huang
Pub Date: 2000-09-21 | DOI: 10.1201/9781420057997.CH19
Mathematical Essence and Structures of Feedforward Artificial Neural Networks
Hongxing Li, C. L. P. Chen, Han-Pang Huang
Pub Date: 2000-09-21 | DOI: 10.1201/9781420057997.ch3
In this chapter, we first introduce the mathematical model and structure of artificial neurons; after that, we consider several artificial neural networks assembled from these neurons. The chapter does not attempt to explain biological neurons in detail. Instead, it focuses on artificial neurons, which abstract the essential operation of biological neurons into mathematical models. We start with an introduction in Section 1, followed by the discussion of neuron models in Section 2.
Foundation of Neuro-Fuzzy Systems and an Engineering Application
Hongxing Li, C. L. P. Chen, Han-Pang Huang
Pub Date: 2000-09-21 | DOI: 10.1201/9781420057997.CH14
This chapter discusses the foundation of neuro-fuzzy systems. First, we introduce the Takagi-Sugeno-Kang (TSK) fuzzy model [1,2] and how it differs from the Mamdani model. Building on the TSK model, we discuss a neuro-fuzzy system architecture, the Adaptive Network-based Fuzzy Inference System (ANFIS) developed by Jang [3]. This model allows a fuzzy system to learn its parameters adaptively: using a hybrid learning algorithm, ANFIS constructs an input-output mapping based on both human knowledge and numerical data. Finally, the ANFIS architecture is applied to an engineering example, IC fabrication time estimation, and the result is compared with two other algorithms: the Gauss-Newton-based Levenberg-Marquardt (GN) algorithm and the backpropagation neural network (BPNN) algorithm. Against these two methods, ANFIS gives the most accurate predictions, at the expense of the highest computational cost. In addition, the adaptive fuzzy inference system gives engineers more physical insight into the relationships between the parameters.
Flat Neural Networks and Rapid Learning Algorithms
Hongxing Li, C. L. P. Chen, Han-Pang Huang
Pub Date: 2000-09-21 | DOI: 10.1201/9781420057997.CH5
In this chapter, we introduce the flat neural network architecture. The system equations of a flat neural network can be formulated as a linear system, so the performance index is a quadratic form in the weights and the weights can be solved easily by a linear least-squares method. Although their system equations are linear in the weights, flat neural networks are well suited to approximating nonlinear functions. A fast learning algorithm is given to find optimal weights of flat neural networks. The formulation also makes it easy to update the weights instantly when an input or a node is added, and a dynamic stepwise updating algorithm is given for such instant updates. Finally, we give several application examples of flat neural networks: an infrared laser data set, a chaotic time series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared with existing models that require more complex architectures and more costly training; the results indicate that flat neural networks are very attractive for real-time processes.
5.1 Introduction
Feedforward artificial neural networks have been a popular research subject in recent years. Research topics range from the theory of learning algorithms, such as the learning and generalization properties of the networks, to applications in control, classification, biomedicine, manufacturing, business forecasting, etc. The backpropagation (BP) supervised learning algorithm is one of the most popular learning algorithms developed for layered networks [1,2]. Improving the learning speed of BP and increasing the generalization capability of the networks have played a central role in neural network research [3-9]. Apart from multi-layer network architectures and the BP algorithm, various simplified architectures and different nonlinear activation functions have been devised. Among these, so-called flat networks, including functional-link neural networks and radial basis function networks, have been proposed [10-15]. These flat networks remove the drawback …
Mathematical Essence and Structures of Feedback Neural Networks and Weight Matrix Design
Hongxing Li, C. L. P. Chen, Han-Pang Huang
Pub Date: 2000-09-21 | DOI: 10.1201/9781420057997.CH7
This chapter focuses on the mathematical essence and structures of neural networks and fuzzy neural networks, especially discrete feedback neural networks. We begin with a review of Hopfield networks and discuss the mathematical essence and structures of discrete feedback neural networks. First, we discuss a general criterion for the stability of networks and show that the commonly used energy function can be regarded as a special case of this criterion. Second, we show that the stable points of a network can be characterized as the fixed points of some function, and that the weight matrix of a feedback neural network can be solved from a group of systems of linear equations. Last, we point out the mathematical basis of the outer-product learning method and give several examples of designing weight matrices based on multifactorial functions. In previous chapters, we discussed in detail the mathematical essence and structures of feedforward neural networks; here, we study feedback neural networks, namely Hopfield networks [1]. A figure (not reproduced here) illustrates a single-layer Hopfield net with n neurons. Its outer input variables are usually treated as "the first impetus": once applied, they are removed and the network continues to evolve by itself. The connection weights satisfy w_ij = w_ji and w_ii = 0, the activation functions of the neurons are denoted by φ_i, and the threshold values by θ_i.
Basic Structure of Fuzzy Neural Networks
Hongxing Li, C. L. P. Chen, Han-Pang Huang
Pub Date: 2000-09-21 | DOI: 10.1201/9781420057997.CH6
In this chapter we discuss the structure of fuzzy neural networks. We start with general definitions of multifactorial functions and show that a fuzzy neuron can be formulated by means of a standard multifactorial function. We also define a fuzzy neural network based on fuzzy relations and fuzzy neurons. Finally, we describe a learning algorithm for a fuzzy neural network based on the ∨ and ∧ operations.
6.1 Definition of Fuzzy Neurons
Neural networks alone have demonstrated their ability to classify, recall, and associate information [1]. In this chapter, we incorporate fuzziness into the networks; the objective is to extend the capability of neural networks to handle "vague" information rather than only "crisp" information. Previous work has shown that fuzzy neural networks have achieved some level of success both fundamentally and practically [1-10]. As indicated in reference [1], there are several ways to classify fuzzy neurons: (1) a fuzzy neuron with crisp signals used to evaluate fuzzy weights, (2) a fuzzy neuron with fuzzy signals combined with fuzzy weights, and (3) a fuzzy neuron described by fuzzy logic equations. In this chapter, we discuss a fuzzy neural network in which both the inputs and the outputs can be either crisp values or fuzzy sets. To do this, we first introduce multifactorial functions [11,12]. We pointed out in Chapter 4 that one basic function of a neuron is that its input is synthesized first and then activated, where the basic synthesizing operators are "+" and "·", denoted (+, ·) and called synthetic operators. However, there are diverse styles of synthetic operators, which will be multifactorial functions, so we now briefly introduce the concept. In [0,1]^m, a natural partial ordering "≤" is defined componentwise: x ≤ y if and only if x_i ≤ y_i for every i. A multifactorial function is actually a projective mapping from the m-ary space [0,1]^m to the unit interval [0,1].