A Hybrid Reinforcement Learning System for Identification and Control
Pub Date: 1992-07-08 | DOI: 10.1109/AIHAS.1992.636855
P. Mills, M. Tadé, A. Zomaya
Using Reinforcement Learning (RL) methods, neural networks can learn a task with only the feedback of a single performance scalar. While this makes them applicable to identification and control tasks, existing RL algorithms suffer from limitations such as binary outputs, poor control of random variation, and susceptibility to local minima of the performance function, which limit their practical application. A new hybrid RL algorithm which addresses these limitations is proposed. The application of the algorithm to identification is discussed and demonstrated with a simulation of the identification of a non-linear static function. An effective method for on-line convergence enhancement is also discussed and demonstrated.
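As an illustrative aside (not the authors' hybrid algorithm), the sketch below shows the basic setting the abstract describes: a unit with a real-valued, non-binary output adapting its weights from nothing but a scalar performance signal while identifying a static non-linear function. The plant f, the polynomial features, and all constants are stand-ins of our own.

```python
# Minimal reward-modulated (REINFORCE-style) sketch: learn y ~ f(x) from a
# single scalar performance feedback. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)                      # unknown static non-linear plant (stand-in)
features = lambda x: np.array([x ** k for k in range(8)])
w = np.zeros(8)                                  # weights of one real-valued output unit

sigma, lr, baseline = 0.3, 0.002, 0.0
for step in range(20000):
    x = rng.uniform(-1.0, 1.0)
    phi = features(x)
    mean = w @ phi
    y = mean + sigma * rng.standard_normal()     # exploratory, real-valued output
    r = -(y - f(x)) ** 2                         # the single scalar performance feedback
    # move the output mean toward outputs that beat the running reward baseline
    w += lr * (r - baseline) * (y - mean) / sigma ** 2 * phi
    baseline += 0.01 * (r - baseline)

print("mean absolute identification error:",
      np.mean([abs(w @ features(x) - f(x)) for x in np.linspace(-1, 1, 101)]))
```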
Integration of Sub-Symbolic and Symbolic Information Processing in Robot Control
Pub Date: 1992-07-08 | DOI: 10.1109/AIHAS.1992.636891
M. Knick, F. Radermacher
In the Autonomous Mobile Systems (AMOS) project, the FAW uses a mobile robot to study questions related to the deep integration of sub-symbolic and symbolic information processing. AMOS aims at methods for autonomously acquiring new concepts via induction from the environment. AMOS is (deliberately) equipped with an incomplete model of itself and of the environment. The robot plans its actions in order to perform certain tasks, e.g. visiting certain locations. The successful execution of a plan results in positive reinforcement. When AMOS recognizes substantial differences between expectation and observation, it collects and classifies the available sensor information. The FAW uses its sub-symbolic image processing system ALIAS to eventually translate collections of such information into a new concept. Such a concept is then integrated into the symbolic world model of AMOS to improve the robot's performance, while at the same time providing feedback concerning the appropriateness of the concepts learned.

1 Basic Ideas behind the Project

The conceptual outline of the project is motivated by aspects of the evolution of life on earth. In the course of evolution, sub-symbolic forms of information processing via neural networks have been of central importance. Essential steps have been the creation of mechanisms which are able to process sensor information (such as pixel images or other data streams) as a basis for behavior control. These steps can, to some extent, be interpreted as early forms of implicit concept generation. Based on collective learning system theory, the FAW projects ALIAS and ALA” have demonstrated, supplementary to other connectionist approaches in this field, the ability to generate concepts carrying semantics in a static environment using a simple organizing principle: spatial neighborhood in images. This reflects one of the laws of "Gestalt" which was discovered long ago by psychologists. The basis for intelligent behavior of systems has gradually improved over the course of evolution, as the level of mere processing of stimulus-response patterns was surpassed and more abstract principles of identifying, organizing, and processing such patterns emerged [all. Usually, one tries to capture and describe this more abstract level by notions such as classes, categories, or the notion of symbol and respective forms of information processing (e.g., logical inferences). The gradual transition to ever broader forms of symbol processing can be seen as the decisive step towards very compact forms of information coding and processing, which are, nevertheless, biologically realizable within a neural network (corresponding to the observation that most types of artificial neural networks allow, among other things, the emulation of (finite) Turing machines, cf. also [23], [24]). In spite of this importance of symbol processing, even today the quite rare process of generating genuine new concepts (which is considered one of the most sophisticated …
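The loop sketched below is our reading of the cycle described in this abstract: plan, act, compare expectation with observation, hand large mismatches to a sub-symbolic classifier, and fold the resulting concept back into the symbolic model. All class and function names, thresholds, and data are hypothetical placeholders, not FAW project code.

```python
# Self-contained toy sketch of the mismatch-driven concept-acquisition loop.
import numpy as np

class WorldModel:
    def __init__(self):
        self.concepts = []                      # symbolic side: acquired prototypes
    def predict(self, action):
        return np.zeros(4)                      # incomplete model: naive expectation
    def add_concept(self, concept):
        self.concepts.append(concept)

def sense(action, rng):
    return rng.normal(size=4)                   # stand-in for real sensor data

def cluster(samples):
    # stand-in for the sub-symbolic step (cf. ALIAS): return a prototype vector
    return np.mean(samples, axis=0) if len(samples) >= 5 else None

rng = np.random.default_rng(1)
model, buffer = WorldModel(), []
for action in range(20):                        # a "plan" of 20 abstract actions
    expected, observed = model.predict(action), sense(action, rng)
    if np.linalg.norm(expected - observed) > 1.5:   # substantial mismatch
        buffer.append(observed)
        concept = cluster(buffer)
        if concept is not None:
            model.add_concept(concept)          # integrate into the symbolic model
            buffer.clear()
print(f"concepts acquired: {len(model.concepts)}")
```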
Intelligent Simulation for Manufacturing Systems
Pub Date: 1992-07-08 | DOI: 10.1109/AIHAS.1992.636873
N. Baid, N. Nagarur
In the present study, an effort is made to achieve integration of simulation and production decision support systems. A three-level interdependent planning hierarchy is considered for decision making in a manufacturing environment. An Intelligent Simulation System (ISS), which contains three modules, viz. an intelligent front end, a simulator, and an intelligent back end, is developed and linked to an upper-level MRP module. SIMAN is used for the simulator, while the rest of the system is implemented in MS-FORTRAN.
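A toy sketch of the three-module flow described above, under the assumption that the front end turns an MRP order release into a simulation scenario, the simulator estimates shop performance, and the back end interprets the result for the planning level. All names, rules, and numbers are ours, not the authors' SIMAN/MS-FORTRAN implementation.

```python
# Hypothetical front end -> simulator -> back end -> MRP feedback pipeline.
from dataclasses import dataclass
import random

@dataclass
class Scenario:
    jobs: int
    dispatch_rule: str

def intelligent_front_end(mrp_release):
    """Turn an MRP order release into a simulation scenario (toy rule)."""
    rule = "SPT" if mrp_release["jobs"] > 50 else "FIFO"
    return Scenario(jobs=mrp_release["jobs"], dispatch_rule=rule)

def simulator(scenario, seed=0):
    """Stand-in for the discrete-event model of the shop floor."""
    random.seed(seed)
    base = scenario.jobs * (0.8 if scenario.dispatch_rule == "SPT" else 1.0)
    return {"mean_flowtime": base + random.uniform(-2.0, 2.0)}

def intelligent_back_end(results, due_window=60.0):
    """Interpret raw simulation output for the planning level."""
    return {"accept_plan": results["mean_flowtime"] <= due_window,
            "mean_flowtime": round(results["mean_flowtime"], 1)}

feedback = intelligent_back_end(simulator(intelligent_front_end({"jobs": 64})))
print(feedback)   # fed back to the upper-level MRP module
```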
Using Partial Derivatives of 3D Images to Extract Typical Surface Features
Pub Date: 1992-07-08 | DOI: 10.1109/AIHAS.1992.636890
O. Monga, S. Benayoun
Three-dimensional edge detection in voxel images is used to locate points corresponding to surfaces of 3D structures. The next stage is to characterize the local geometry of these surfaces in order to extract points or lines which may be used by registration and tracking procedures. Typically one must calculate second-order differential characteristics of the surfaces such as the maximum, mean, and Gaussian curvature. The classical approach is to use local surface fitting, thereby confronting the problem of establishing links between 3D edge detection and local surface approximation. To avoid this problem, we propose to compute the curvatures at locations designated as edge points directly from the partial derivatives of the image. By assuming that the surface is defined locally by an isointensity contour (i.e., the 3D gradient at an edge point corresponds to the normal to the surface), one can calculate directly the curvatures and characterize the local curvature extrema (ridge points) from the first, second, and third derivatives of the gray level function. These partial derivatives can be computed using the operators of the edge detection. In the more general case where the contours are not isocontours (i.e., the gradient at an edge point only approximates the normal to the surface), the only differential invariants of the image are in R4. This leads us to treat the 3D image as a hypersurface (a three-dimensional manifold) in R4. We give the relationships between the curvatures of the hypersurface and the curvatures of the surface defined by edge points. The maximum curvature at a point on the hypersurface depends on the second partial derivatives of the 3D image. We note that it may be more efficient to smooth the data in R4. Moreover, this approach could also be used to detect corners or vertices. We present experimental results obtained using real data (X-ray scanner data) and applying these two methods. As an example of the stability, we extract ridge lines in two 3D X-ray scanner datasets of a skull taken in different positions.
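One standard way to realize the isointensity-surface idea (our sketch, not necessarily the authors' exact operators) is to take the gradient g and Hessian H of the grey-level function by finite differences and evaluate the implicit-surface curvature formulas: mean curvature (g^T H g − |g|² tr H) / (2 |g|³) and Gaussian curvature from the bordered-Hessian determinant.

```python
# Voxel-wise mean and Gaussian curvature of the isosurfaces of a 3D image,
# computed directly from image derivatives (finite differences via np.gradient).
import numpy as np

def iso_surface_curvatures(volume, spacing=1.0):
    gx, gy, gz = np.gradient(volume, spacing)                 # first derivatives
    gxx, gxy, gxz = np.gradient(gx, spacing)                  # second derivatives
    _,   gyy, gyz = np.gradient(gy, spacing)
    _,   _,   gzz = np.gradient(gz, spacing)

    g = np.stack([gx, gy, gz], axis=-1)                       # gradient, shape (..., 3)
    H = np.stack([np.stack([gxx, gxy, gxz], axis=-1),
                  np.stack([gxy, gyy, gyz], axis=-1),
                  np.stack([gxz, gyz, gzz], axis=-1)], axis=-2)  # Hessian, shape (..., 3, 3)

    g2 = np.einsum('...i,...i', g, g) + 1e-12                 # |grad|^2
    gHg = np.einsum('...i,...ij,...j', g, H, g)
    trH = np.trace(H, axis1=-2, axis2=-1)
    mean_c = (gHg - g2 * trH) / (2.0 * g2 ** 1.5)             # sign depends on orientation

    # Gaussian curvature via the bordered-Hessian determinant
    bordered = np.concatenate(
        [np.concatenate([H, g[..., :, None]], axis=-1),
         np.concatenate([g[..., None, :], np.zeros_like(g2)[..., None, None]], axis=-1)],
        axis=-2)
    gauss_c = -np.linalg.det(bordered) / g2 ** 2
    return mean_c, gauss_c

# quick sanity check on a synthetic distance field of a sphere of radius 20 voxels
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
vol = np.sqrt((xx - 32.0) ** 2 + (yy - 32.0) ** 2 + (zz - 32.0) ** 2)
mean_c, gauss_c = iso_surface_curvatures(vol)
print(mean_c[32, 32, 52], gauss_c[32, 32, 52])   # expect roughly ±1/20 and 1/400
```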
Incremental Rule-Based Control and Learning
Pub Date: 1992-07-08 | DOI: 10.1109/AIHAS.1992.636858
D. Luzeaux
A Development Toolkit to Support the Architecture of Flexible Manufacturing Systems Based on Intelligent Autonomous Units
Pub Date: 1992-07-08 | DOI: 10.1109/AIHAS.1992.636859
Alexander Sedlmeier, S. Bocionek, H. Weil
In this paper we propose a concept for a LEGO-like toolkit and a unique architecture for an autonomous factory. The toolkit provides a programming environment for developing, simulating, and testing a manufacturing application. This significantly helps to reduce programming times and results in better software quality. The components of this toolkit are modelled by intelligent units that run in parallel. The units exchange asynchronous messages according to a task-oriented service protocol. Such services hide the internals of the units completely and form the basis for flexible combination and exchange. Every unit consists of three parallel sub-processes: controller, planner, and exception handler. The controller is responsible for performing a task. The planner must derive solutions if a service cannot be provided by using predefined programs. The exception handler's purpose is to find and direct suitable reactions to incoming error messages. The intelligent behavior of each unit depends on the algorithms used by the planner of each unit and on the contents of the common knowledge base.
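A compressed sketch of one such unit, assuming an asyncio-style message-passing implementation with hypothetical service names; the paper's task-oriented protocol and planning algorithms are not reproduced here.

```python
# One "intelligent unit" as three concurrent sub-processes sharing message queues.
import asyncio

async def controller(requests, plans, errors):
    programs = {"drill_hole": "drill cycle", "polish_edge": "polish cycle"}   # predefined services
    while True:
        task = await requests.get()
        if task not in programs:
            await plans.put(task)                        # no predefined program: ask the planner
        else:
            print(f"controller: executing {programs[task]!r}")
            if task == "polish_edge":
                await errors.put("tool wear during polish_edge")   # simulated fault report

async def planner(plans):
    while True:
        task = await plans.get()
        print(f"planner: deriving a solution for {task!r} from the knowledge base")

async def exception_handler(errors):
    while True:
        msg = await errors.get()
        print(f"exception handler: directing a reaction to {msg!r}")

async def main():
    requests, plans, errors = asyncio.Queue(), asyncio.Queue(), asyncio.Queue()
    subprocesses = [asyncio.create_task(c) for c in
                    (controller(requests, plans, errors), planner(plans), exception_handler(errors))]
    for service in ("drill_hole", "inspect_part", "polish_edge"):   # requests from another unit
        await requests.put(service)
    await asyncio.sleep(0.1)                             # let the unit work, then shut down
    for t in subprocesses:
        t.cancel()

asyncio.run(main())
```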
Applying a Graphical Locomotion Model to a Behavioural Animation System
Pub Date: 1992-07-08 | DOI: 10.1109/AIHAS.1992.636875
A. Marriott, Toto Widyanto
This paper presents a method of applying a graphical locomotion model to a behavioural animation system. The locomotion models (actors) are driven by their motives and needs, aided by their visual perception systems: they are capable of detecting corners and edges of the environment so they can move without colliding with any obstacle. Each actor may regard other actors as friendly or frightening, and decisions may be made by the actors to approach, avoid, grasp, or eat. The graphical model must be capable of performing these actions in a realistic manner. The 2-D nature of the behavioural animation system is implemented in 3-D by assuming that the actors are anchored to the 2-D plane. This still allows flexible locomotion for most models.
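As a toy illustration of such motive-driven approach/avoid decisions on the 2-D plane (the potential-field rule, weights, and positions are ours, not the paper's model):

```python
# One steering step: attraction to friendly actors, repulsion from threats and obstacles.
import numpy as np

def steer(actor_pos, friends, threats, obstacles, step=0.2):
    force = np.zeros(2)
    for p in friends:                              # approach what the actor likes
        d = p - actor_pos
        force += d / (np.linalg.norm(d) + 1e-9)
    for p in threats + obstacles:                  # avoid frightening actors and obstacle edges
        d = actor_pos - p
        dist = np.linalg.norm(d) + 1e-9
        force += d / dist ** 3                     # repulsion grows sharply when close
    norm = np.linalg.norm(force)
    return actor_pos + step * force / norm if norm > 0 else actor_pos

pos = np.array([0.0, 0.0])
for _ in range(40):
    pos = steer(pos,
                friends=[np.array([5.0, 2.0])],
                threats=[np.array([2.0, 1.0])],
                obstacles=[np.array([3.0, 3.0])])
print(pos)   # the actor ends up near the friendly actor while skirting the threat
```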
Towards Design and Control of High Autonomy Manufacturing Systems
Pub Date: 1992-07-08 | DOI: 10.1109/AIHAS.1992.636860
J. Rozenblit, W. Jacak
In this paper, requirements for the design of high autonomy manufacturing systems are stipulated. Efforts towards amalgamating the autonomous architecture and its real-world component (i.e., a flexible manufacturing system) are presented. Planning and control principles derived from discrete event modeling techniques are summarized.
Relevance-Derived Metafunction: How to Interface Intelligent System's Sub-Components
Pub Date: 1992-07-08 | DOI: 10.1109/AIHAS.1992.636864
B. Gorayska, R. Lindsay, K.R. Cox, J. Marsh, N. Tse
This paper introduces a relevance-derived metafunction capable of inducing smooth integration between Goals, Plans, plan Elements, Agents, and world Models. It provides the top-level architecture of an intelligent system that utilises the metafunction, and describes a prototype, GEPAM (Goals, Elements, Plans, Agents, Models), which implements some of its aspects. The advantages of capturing the notion of relevance in process-oriented terms for the purpose of integrating an intelligent system's sub-components are discussed, and directions for future research and development are indicated.
Use of Backpropagation Network for Modeling an Electrostatic Precipitator
Pub Date: 1992-07-08 | DOI: 10.1109/AIHAS.1992.636856
T. Lakshminarayana, H. Murty
This paper discusses the use of a layered feed-forward neural network employing the generalized delta rule to model and predict the performance of an industrial-scale electrostatic precipitator characterized by non-linearities. The precipitator has been operated under different input conditions and the opacity at its outlet measured. Using different combinations of these inputs, two models of a three-layered neural network have been generated. The model and the precipitator responses have been obtained for the same set of input variables and have been found to agree quite well. A large training set, a larger input vector, and a proper choice of input variables can improve the model accuracy.
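A compact sketch of the model class described here: a three-layer feed-forward network trained with the generalized delta rule (plain backpropagation) to map operating inputs to a single output such as outlet opacity. The data below are synthetic stand-ins; the paper's precipitator variables are not reproduced.

```python
# Three-layer feed-forward net trained with the generalized delta rule (backprop).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))                         # e.g. voltage, current, gas flow
y = np.tanh(X @ np.array([0.7, -0.4, 0.9]) + 0.3)[:, None]    # stand-in non-linear plant

n_in, n_hid, n_out = 3, 8, 1
W1, b1 = 0.5 * rng.standard_normal((n_in, n_hid)), np.zeros(n_hid)
W2, b2 = 0.5 * rng.standard_normal((n_hid, n_out)), np.zeros(n_out)
lr = 0.1

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    out = np.tanh(h @ W2 + b2)                # output layer (opacity estimate)
    err = out - y
    # generalized delta rule: propagate the error back through the activation derivatives
    delta2 = err * (1 - out ** 2)
    delta1 = (delta2 @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ delta2 / len(X); b2 -= lr * delta2.mean(axis=0)
    W1 -= lr * X.T @ delta1 / len(X); b1 -= lr * delta1.mean(axis=0)

print("final MSE:", float(np.mean((out - y) ** 2)))
```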