Pub Date: 1999-10-01 | DOI: 10.1016/S0954-1810(99)00022-9
R.-S. Guh, F. Zorriassatine, J.D.T. Tannock, C. O'Brien
Unnatural patterns in control charts can be associated with a specific set of assignable causes of process variation; pattern recognition is therefore very useful in identifying process problems. A common difficulty in existing control chart pattern recognition approaches is discriminating between different types of patterns which share similar features. This paper proposes an artificial neural network based model, which employs a pattern discrimination algorithm to recognise unnatural control chart patterns. The pattern discrimination algorithm is based on several special-purpose networks trained for specific recognition tasks. The performance of the proposed model was evaluated by simulation using two criteria: the percentage of correctly recognised patterns and the average run length (ARL). Numerical results show that the false recognition problem has been effectively addressed. In comparison with previous control chart approaches, the proposed model achieves superior ARL performance while also accurately identifying the type of unnatural pattern.
"On-line control chart pattern detection and discrimination—a neural network approach". Artificial Intelligence in Engineering, 13(4), pp. 413–425.
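The ARL criterion used to evaluate the model can be estimated by Monte Carlo simulation. The sketch below is not the authors' model: it uses a plain Shewhart individuals chart with 3-sigma limits and hypothetical parameters, counting samples until the chart signals and averaging over many runs.

```python
import random
import statistics

def run_length(shift, limit=3.0, rng=random):
    """Samples drawn until an individuals chart with +/-limit control
    limits signals, for a unit-variance process whose mean has shifted."""
    n = 0
    while True:
        n += 1
        if abs(rng.gauss(shift, 1.0)) > limit:
            return n

def average_run_length(shift, trials=2000, seed=1):
    """Monte Carlo estimate of the ARL for a given mean shift."""
    rng = random.Random(seed)
    return statistics.mean(run_length(shift, rng=rng) for _ in range(trials))

# Theory: in-control ARL of a 3-sigma chart is ~370; a one-sigma
# mean shift reduces it to roughly 44. The estimates should be close.
print(round(average_run_length(0.0)))
print(round(average_run_length(1.0)))
```

A good on-line scheme wants a long in-control ARL (few false alarms) and a short out-of-control ARL (fast detection), which is the trade-off the paper's ARL comparison measures.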
Pub Date: 1999-10-01 | DOI: 10.1016/S0954-1810(99)00021-7
Liu Min, Wu Cheng
Scheduling identical parallel machines to minimize the makespan is an important production scheduling problem, but solving large-scale instances with many jobs and machines has proved difficult. Genetic algorithms have shown great advantages in combinatorial optimization because of their efficiency and suitability for practical application. This article presents a genetic algorithm based on a machine-code representation for minimizing the makespan in identical parallel machine scheduling. Numerical examples of several sizes demonstrate that the proposed genetic algorithm is efficient, scales to larger instances, and produces solutions of better quality than a heuristic procedure and a simulated annealing method.
"A genetic algorithm for minimizing the makespan in the case of scheduling identical parallel machines". Artificial Intelligence in Engineering, 13(4), pp. 399–403.
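The heuristic baseline such a GA is measured against can be illustrated with the classic Longest Processing Time (LPT) rule, a standard list-scheduling heuristic for this problem; the job data below are hypothetical, not from the paper.

```python
import heapq

def lpt_makespan(jobs, machines):
    """Longest Processing Time first: sort jobs in descending order and
    always assign the next job to the currently least-loaded machine."""
    loads = [0] * machines
    heapq.heapify(loads)  # min-heap of machine loads
    for p in sorted(jobs, reverse=True):
        load = heapq.heappop(loads)
        heapq.heappush(loads, load + p)
    return max(loads)

# 8 jobs on 3 machines: LPT gives 15, while the optimum is 14
# (loads 14/14/13) -- the kind of gap a GA search can close.
print(lpt_makespan([7, 7, 6, 6, 5, 4, 4, 2], 3))
```

LPT is fast and has a worst-case ratio of 4/3 − 1/(3m), but as the example shows it can miss the optimal packing that a population-based search may find.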
Pub Date: 1999-10-01 | DOI: 10.1016/S0954-1810(99)00010-2
J. Müller , H. Stahl
This paper describes a domain-limited system for speech understanding as well as for speech translation. An integrated semantic decoder directly converts the preprocessed speech signal into its semantic representation by maximum a-posteriori classification. By combining probabilistic knowledge at the acoustic, phonetic, syntactic, and semantic levels, the semantic decoder extracts the most probable meaning of the utterance. No separate speech recognition stage is needed, because the Viterbi algorithm (which calculates acoustic probabilities using hidden Markov models) is integrated with a probabilistic chart parser (which calculates semantic and syntactic probabilities with special models). The semantic structure is introduced as a representation of an utterance's meaning. It can be used as an intermediate level for a succeeding intention decoder (within a speech understanding system for controlling a running application by spoken inputs) as well as an interlingua level for a succeeding language production unit (within an automatic speech translation system for producing spoken output in another language). Following these principles and using the respective algorithms, speech understanding and speech translation front-ends were successfully realised for the domains 'graphic editor', 'service robot', 'medical image visualisation' and 'scheduling dialogues'.
"Speech understanding and speech translation by maximum a-posteriori semantic decoding". Artificial Intelligence in Engineering, 13(4), pp. 373–384.
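The Viterbi algorithm mentioned in the abstract is shown below in its generic discrete-HMM form, not the paper's integrated decoder; the toy model and its probabilities are purely illustrative.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for a discrete HMM
    (probability space; log-space is preferable for long inputs)."""
    # V[t][s] = (best probability of any path ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prob, prev = max(
                (V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p) for p in states
            )
            row[s] = (prob, prev)
        V.append(row)
    best = max(states, key=lambda s: V[-1][s][0])  # best final state
    path = [best]
    for row in reversed(V[1:]):                    # follow stored predecessors
        path.append(row[path[-1]][1])
    return path[::-1]

# Toy weather model (illustrative values, not from the paper)
states = ("Rainy", "Sunny")
start = {"Rainy": 0.6, "Sunny": 0.4}
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(("walk", "shop", "clean"), states, start, trans, emit))
# prints ['Sunny', 'Rainy', 'Rainy']
```

In the paper's architecture the same dynamic-programming idea runs jointly with a probabilistic chart parser, so acoustic and semantic scores are maximised together rather than in separate stages.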
Pub Date: 1999-10-01 | DOI: 10.1016/S0954-1810(99)00011-4
F. Güneş, H. Torpi, B.A. Çetiner
This work comprises three parts. The first is a multidimensional signal–noise neural network model for a microwave small-signal transistor. The device is modelled as a black box whose small-signal and noise parameters are evaluated by a neural network, based on fitting both sets of parameters, over multiple bias and configuration conditions, to their target values. The second part is the computer simulation of the possible performance (F, Vi, Gtmax) triplets. In the final part, which combines the first two, the performance curves are obtained using the relationships among the operating conditions f, VCE and ICE, the noise figure, the input VSWR and the maximum stable transducer gain.
"Neural network modeling of active devices for use in MMIC design". Artificial Intelligence in Engineering, 13(4), pp. 385–392.
Pub Date: 1999-10-01 | DOI: 10.1016/S0954-1810(99)00009-6
J.M. Garrell i Guiu, E. Golobardes i Ribé, E. Bernadó i Mansilla, X. Llorà i Fàbrega
This article describes the application of Machine Learning (ML) techniques to a real-world problem: the automatic diagnosis (classification) of mammary biopsy images. The techniques applied are Genetic Algorithms (GA) and Case-Based Reasoning (CBR). The article compares our results with previous results obtained using Neural Networks (NN). The main goals are to solve this type of classification problem efficiently and to compare different Machine Learning alternatives. The article also introduces the systems we developed for solving this kind of classification problem: the Genetic Based Classifier System (GeB-CS) for the GA approach, and the Case-Based Classifier System (CaB-CS) for the CBR approach.
"Automatic diagnosis with genetic algorithms and case-based reasoning". Artificial Intelligence in Engineering, 13(4), pp. 367–372.
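The retrieve-and-reuse step at the heart of any case-based classifier can be sketched as nearest-neighbour retrieval. This is a generic illustration, not the authors' CaB-CS; the feature values and labels are made up.

```python
import math

def nearest_case(case_base, query):
    """Retrieve the stored case closest to the query (Euclidean distance)
    and reuse its label: the retrieve/reuse core of case-based classification."""
    features, label = min(case_base, key=lambda c: math.dist(c[0], query))
    return label

# Hypothetical two-feature biopsy cases (values are invented)
cases = [((0.2, 0.1), "benign"),
         ((0.9, 0.8), "malignant"),
         ((0.3, 0.2), "benign")]
print(nearest_case(cases, (0.85, 0.9)))  # closest stored case is malignant
```

A GA-based classifier system instead evolves a population of classification rules; comparing the two on the same case base is essentially what the article's GeB-CS/CaB-CS comparison does at full scale.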
Pub Date: 1999-07-01 | DOI: 10.1016/S0954-1810(99)00002-3
O. Shai , K. Preiss
The discrete mathematical representations of graph theory, augmented by theorems of matroid theory, were found to have elements and structures isomorphic with those of many different engineering systems. The properties of the mathematical elements of those graphs, and the relations between them, are then equivalent to knowledge about the engineering system, and are hence termed "embedded knowledge". The use of this embedded knowledge is illustrated by several examples: a structural truss, a gear wheel system, a mass–spring–dashpot system and a mechanism. Using various graph representations, together with the theorems and algorithms embedded within them, provides a fruitful source of representations which can form a basis upon which to extend formal theories of reformulation.
"Graph theory representations of engineering systems and their embedded knowledge". Artificial Intelligence in Engineering, 13(3), pp. 273–285.
Pub Date: 1999-07-01 | DOI: 10.1016/S0954-1810(98)00014-4
Yaoxue Zhang, Hua Chen
One of the most important issues in computer integrated manufacturing systems is job scheduling. Though many scheduling criteria have been proposed, most are impractical in the low-volume/high-variety manufacturing environment. This paper reports the development of a knowledge-based dynamic job-scheduling system for such environments. The system provides a practical facility for job scheduling which takes into account many factors, such as machine setup times, cell changes, replacement machines and load balancing among machines. The system is based on a set of heuristic algorithms and intranet technology. The knowledge-based paradigm and intranet technology have proved very useful for complex scheduling problems in low-volume/high-variety manufacturing.
"A knowledge-based dynamic job-scheduling in low-volume/high-variety manufacturing". Artificial Intelligence in Engineering, 13(3), pp. 241–249.
Pub Date: 1999-07-01 | DOI: 10.1016/S0954-1810(98)00015-6
Y.-S. Yeun , K.-H. Lee , Y.-S. Yang
This paper concerns the development of a hybrid system of neural networks and genetic programming (GP) trees for problem domains where the complete input space can be decomposed into several subregions that are well represented by an oblique decision tree. The overall architecture of this system, called federated agents, consists of a facilitator, local agents and boundary agents. Neural networks serve as local agents, each of which is expert in a different subregion. GP trees serve as boundary agents. A boundary agent specializes in the borders of subregions, where discontinuities or a few different patterns may coexist. The facilitator is responsible for choosing the local agent suited to given input data, using information obtained from the oblique decision tree. However, when the input data lie close enough to the boundaries, there is a large possibility of selecting an invalid local agent as a result of an incorrect prediction by the decision tree. Such a situation can lead the federated agents to produce a higher prediction error than a single neural network trained over the whole input space. To deal with this, the facilitator selects the boundary agent instead of a local agent when the input data are close to a border between subregions. In this way, even if the decision tree yields an incorrect prediction, the performance of the system is less affected by it.
The validity of our approach is examined by applying federated agents to the approximation of a function with discontinuities and to the configuration of the midship section of bulk cargo ships.
"Function approximations by coupling neural networks and genetic programming trees with oblique decision trees". Artificial Intelligence in Engineering, 13(3), pp. 223–239.
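The facilitator's routing logic can be sketched in miniature: route to the subregion expert unless the input falls inside a margin around the split, in which case the boundary agent answers. The one-dimensional target, split point and margin below are all invented for illustration; the real system uses trained networks, GP trees and an oblique decision tree.

```python
def make_facilitator(split, margin, local_agents, boundary_agent):
    """Route a 1-D input to the local agent for its subregion, unless it
    lies within `margin` of the split, where the boundary agent answers."""
    def predict(x):
        if abs(x - split) <= margin:
            return boundary_agent(x)      # border zone: trust the boundary agent
        return local_agents[0](x) if x < split else local_agents[1](x)
    return predict

# Toy piecewise target with a discontinuity (jump of +3) at x = 1
left = lambda x: 2.0 * x                               # expert for x < 1
right = lambda x: 2.0 * x + 3.0                        # expert for x >= 1
border = lambda x: 2.0 * x + (3.0 if x >= 1 else 0.0)  # handles the jump
f = make_facilitator(split=1.0, margin=0.1,
                     local_agents=(left, right), boundary_agent=border)
print(f(0.5), f(1.5), f(0.95))  # 0.95 is in the border zone
```

The point of the design is visible at `f(0.95)`: a misrouted local agent would be off by the jump height, while the boundary agent absorbs the error.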
Pub Date: 1999-07-01 | DOI: 10.1016/S0954-1810(99)00003-5
D.T. Pham, S.J. Oh
This article investigates the approximation of the inverse dynamics of unknown plants using a new type of recurrent backpropagation neural network. The network has two input elements when modelling a single-output plant, one to receive the plant output and the other, an error input to compensate for modelling uncertainties. The network has feedback connections from its output, hidden, and input layers to its “state” layer and self-connections within the “state” layer. The essential point of the proposed approach is to make use of the direct inverse learning scheme to achieve simple and accurate inverse system identification even in the presence of noise. This approach can easily be extended to the area of on-line adaptive control which is briefly introduced. Simulation results are given to illustrate the usefulness of the method for the simpler case of controlling time-invariant plants.
"Identification of plant inverse dynamics using neural networks". Artificial Intelligence in Engineering, 13(3), pp. 309–320.
Pub Date: 1999-07-01 | DOI: 10.1016/S0954-1810(98)00021-1
Yoram Reich , S.V. Barai
The use of machine learning (ML), and in particular artificial neural networks (ANN), in engineering applications has increased dramatically over recent years. However, by and large, the development of such applications, and their reporting, lacks proper evaluation. Deficient evaluation practice was observed in the general neural networks community, and again in engineering applications, through a survey we conducted of articles published in AI in Engineering and elsewhere. This status hinders understanding and prevents progress. This article's goal is to remedy the situation. First, several evaluation methods are discussed along with their relative qualities. Second, these qualities are illustrated by using the methods to evaluate ANN performance on two engineering problems. Third, a systematic evaluation procedure for ML is discussed. This procedure will lead to better evaluation of studies and, consequently, to improved research and practice in the area of ML in engineering applications.
"Evaluating machine learning models for engineering problems". Artificial Intelligence in Engineering, 13(3), pp. 257–272.
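Systematic evaluation procedures of the kind this article advocates typically rest on resampling rather than a single arbitrary train/test split. The sketch below is a minimal k-fold cross-validation loop, a generic illustration rather than the authors' specific protocol.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k disjoint folds after shuffling."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(model_fn, X, y, k=5):
    """Average held-out accuracy over k train/test splits.
    model_fn(X_train, y_train) must return a predict(x) callable."""
    scores = []
    for fold in k_fold_indices(len(X), k):
        test = set(fold)
        train = [i for i in range(len(X)) if i not in test]
        model = model_fn([X[i] for i in train], [y[i] for i in train])
        correct = sum(model(X[i]) == y[i] for i in fold)
        scores.append(correct / len(fold))
    return sum(scores) / len(scores)
```

Reporting the mean (and spread) over folds, together with the exact splitting procedure, is precisely the kind of reproducible evaluation practice whose absence the survey criticises.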