Bounding NBLM neighbourhood's adequate sizes
R. Mayoral, G. Lera
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198960
Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02)
Abstract: We address the a priori selection of an adequate neighbourhood size for NBLM. Applying the concept of a neural neighbourhood to the Levenberg-Marquardt optimization method led us to develop the NBLM algorithm. With this algorithm, some neighbourhoods not only produce significant reductions in memory requirements but also achieve better time performance than the Levenberg-Marquardt method. However, until the problem of choosing an appropriate neighbourhood size is solved, the NBLM algorithm cannot offer its best possible performance.
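For reference, a plain Levenberg-Marquardt step for nonlinear least squares looks as follows. This is a generic sketch, not the authors' NBLM: it assembles the full damped system (J^T J + mu I) whose memory cost NBLM reduces by restricting updates to neural neighbourhoods.

```python
import numpy as np

def lm_step(w, residual_fn, jacobian_fn, mu=1e-3):
    """One Levenberg-Marquardt update: w <- w - (J^T J + mu I)^-1 J^T r."""
    r = residual_fn(w)
    J = jacobian_fn(w)
    H = J.T @ J + mu * np.eye(len(w))      # damped Gauss-Newton Hessian
    return w - np.linalg.solve(H, J.T @ r)

# Fit y = a * exp(b * x) to noiseless synthetic data (illustrative only).
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)
residual = lambda w: w[0] * np.exp(w[1] * x) - y
jacobian = lambda w: np.column_stack([np.exp(w[1] * x),
                                      w[0] * x * np.exp(w[1] * x)])
w = np.array([1.0, 1.0])
for _ in range(100):
    w = lm_step(w, residual, jacobian)
# w converges to the true parameters (a, b) = (2.0, 1.5).
```

The matrix H is p x p for p weights, which is exactly the storage the neighbourhood-based variant trims by updating only a subset of weights at a time.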
Acoustic emission signal classification using fuzzy c-means clustering
S. Omkar, S. Suresh, T. Raghavendra, V. Mani
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198989
Abstract: Fuzzy c-means (FCM) clustering is used to classify acoustic emission (AE) signals according to their source. FCM can discover clusters in the data even when the boundaries between subgroups overlap. FCM-based techniques have an advantage over conventional statistical techniques, such as maximum-likelihood estimation and nearest-neighbour classifiers, because they are distribution-free, i.e. no knowledge of the data distribution is required. AE tests are carried out using pulse, pencil and spark signal sources on the surface of a solid steel block. Four parameters, event duration (E_d), peak amplitude (P_a), rise time (R_t) and ring-down count (R_d), are measured using an AET 5000 system. These data are used to train and validate the FCM-based classification.
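A minimal fuzzy c-means loop illustrates the technique; the synthetic 2-D data, cluster count and random seed below are illustrative choices, not the paper's four-parameter AE setup.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: returns soft memberships U (n x c) and centroids V."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # rows sum to 1
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]   # fuzzy centroids
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, V

# Two well-separated synthetic sources: memberships split cleanly.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.normal(5.0, 0.3, (30, 2))])
U, V = fcm(X, c=2)
labels = U.argmax(axis=1)
```

Because each point carries a membership grade for every cluster rather than a hard label, overlapping source populations degrade gracefully, which is the property the paper relies on.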
Flexible weighted neuro-fuzzy systems
L. Rutkowski, K. Cpałka
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198995
Abstract: In this paper we study new neuro-fuzzy systems, called OR-type neuro-fuzzy inference systems (NFIS). Based on input-output data, we learn not only the parameters of the membership functions but also the type of the system and the aggregation parameters. We introduce the weighted T-norm and S-norm into neuro-fuzzy inference systems. Our approach adds flexibility to the structure and learning of neuro-fuzzy systems.
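One common way to attach importance weights to triangular norms is sketched below, using the product t-norm and the probabilistic-sum s-norm; each argument a_i is replaced by 1 - w_i*(1 - a_i) before the t-norm and by w_i*a_i before the s-norm, so that w_i = 0 makes the i-th antecedent irrelevant and w_i = 1 recovers the unweighted norm. This is an assumed illustrative construction; the paper's exact weighted-norm definitions may differ.

```python
def weighted_tnorm(a, w):
    """Weighted product t-norm: w_i = 0 ignores a_i, w_i = 1 keeps it."""
    out = 1.0
    for ai, wi in zip(a, w):
        out *= 1.0 - wi * (1.0 - ai)   # a_i pushed toward 1 as w_i -> 0
    return out

def weighted_snorm(a, w):
    """Weighted probabilistic-sum s-norm: w_i = 0 ignores a_i."""
    out = 0.0
    for ai, wi in zip(a, w):
        x = wi * ai                    # a_i pushed toward 0 as w_i -> 0
        out = out + x - out * x        # probabilistic sum s(a,b)=a+b-ab
    return out
```

Making the w_i trainable alongside the membership-function parameters is what gives the system its extra degrees of freedom.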
Structure-adaptive SOM to classify 3-dimensional point light actors' gender
Sung-Bae Cho
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198201
Abstract: Classifying the patterns of moving point lights attached to actors' bodies with a self-organizing map often fails under the original unsupervised learning algorithm. This paper exploits a structure-adaptive self-organizing map (SASOM) that adaptively updates the weights, structure and size of the map, yielding a remarkable improvement in pattern classification performance. We compare the results with those of conventional pattern classifiers and of human subjects. SASOM turns out to be the best classifier, producing a 97.1% recognition rate on the 312 test samples from 26 subjects.
Generalization bounds for the regression of real-valued functions
R. Kil, Imhoi Koo
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198977
Abstract: The paper suggests a new bound for estimating the confidence interval, defined by the absolute difference between the true (or general) and empirical risks, for the regression of real-valued functions. Theoretical bounds on such confidence intervals can be derived in the probably approximately correct (PAC) learning framework. However, these theoretical bounds are greatly overestimated and do not fit empirical data well. Accordingly, a new bound on the confidence interval is suggested that explains the behaviour of learning machines more faithfully to the given samples.
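A generic distribution-free bound of the kind the paper argues is overestimated is the Hoeffding-style PAC interval: for losses in [0, 1] and n i.i.d. samples, with probability at least 1 - delta, |R - R_emp| <= sqrt(ln(2/delta) / (2n)). The sketch below computes that gap (this is a textbook bound for context, not the authors' tighter one).

```python
import math

def hoeffding_gap(n, delta):
    """Hoeffding confidence half-width for bounded losses in [0, 1]."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# The gap shrinks as 1/sqrt(n): quadrupling the sample halves it.
g1000 = hoeffding_gap(1000, 0.05)   # about 0.043
g4000 = hoeffding_gap(4000, 0.05)
```

At n = 1000 the guaranteed gap is already around 4% of the loss range regardless of how well the learner actually fits, which is exactly the looseness that motivates data-dependent bounds.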
Neural network based near-optimal routing algorithm
C. Ahn, R. S. Ramakrishna, In-Chan Choi, C. Kang
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198978
Abstract: Presents a neural-network-based near-optimal routing algorithm. It employs a modified Hopfield neural network (MHNN) to solve the shortest-path problem, and it guarantees speedy computation appropriate to multi-hop radio networks. The MHNN uses every piece of information available at the peripheral neurons, in addition to the highly correlated information available at the local neuron. Consequently, every neuron converges speedily and optimally to a stable state, faster than in algorithms that employ conventional Hopfield neural networks. Computer simulations support these claims. The results are largely independent of network topology for almost all source-destination pairs.
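For reference, the shortest-path problem the MHNN is built to solve can be stated in conventional form (Bellman-Ford relaxation). The paper's neural formulation maps link costs onto a modified Hopfield energy function instead, which is not reproduced here; the topology below is an invented example.

```python
def bellman_ford(n_nodes, edges, src):
    """edges: list of directed links (u, v, cost); returns dist from src."""
    INF = float("inf")
    dist = [INF] * n_nodes
    dist[src] = 0.0
    for _ in range(n_nodes - 1):            # relax every link n-1 times
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
    return dist

# 4-node multi-hop topology: 0 -> 3 directly costs 10.0,
# but the relayed route 0 -> 1 -> 2 -> 3 costs only 6.0.
edges = [(0, 1, 2.0), (1, 2, 2.0), (2, 3, 2.0), (0, 3, 10.0), (1, 3, 5.0)]
dist = bellman_ford(4, edges, src=0)        # -> [0.0, 2.0, 4.0, 6.0]
```

The appeal of the Hopfield formulation is that all neurons relax in parallel, whereas the sequential relaxation above costs O(nodes x edges).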
Multiple regression using support vector machines for recognition of speech in a moving car environment
W. Lee, C. Sekhar, K. Takeda, F. Itakura
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198192
Abstract: In a moving-car environment, speech data are collected using a close-talking microphone placed in the driver's headset and multiple distant microphones placed around the driver. We address the estimation of the spectral features of the speech recorded on the close-talking microphone from the spectral features of the data recorded on the distant microphones. We study estimation methods such as concatenation, averaging, linear regression and nonlinear regression, and we consider support vector machines (SVMs) for nonlinear regression of multiple spectral coefficients. We compare the performance of SVMs and hidden Markov models (HMMs) in recognizing subword units of speech using the original and the estimated spectral features. A Japanese speech corpus recorded in a moving car is used for our studies on spectral-feature estimation and subword recognition. The results show that SVM-based regression outperforms linear regression, and that SVMs give higher recognition accuracy than HMMs.
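The linear-regression baseline the paper compares SVM regression against amounts to estimating the close-talking features as a linear map of the distant-microphone features, solvable by ordinary least squares. The data below are synthetic and the 12-dimensional feature size is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 12
X = rng.normal(size=(n, d))                      # distant-mic features
A_true = rng.normal(size=(d, d))                 # unknown channel map
Y = X @ A_true + 0.01 * rng.normal(size=(n, d))  # close-talking targets

# Least-squares estimate of the map: minimizes ||X A - Y||_F.
A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
err = np.abs(A_hat - A_true).max()               # small when noise is small
```

A linear map cannot capture the nonlinear room and channel effects between the microphones, which is why the paper turns to SVM regression for each spectral coefficient.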
Selecting the variables that train a self-organizing map (SOM) which best separates predefined clusters
S. Laine
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1199016
Abstract: The paper presents how to find the variables that best illustrate a problem of interest when visualizing with the self-organizing map (SOM). The user defines what is interesting by labelling data points, e.g. with letters; these labels assign the data points to clusters. An optimization algorithm then searches for the set of variables that best separates the clusters; these variables reflect the knowledge the user applied when labelling the data points. The paper measures separability not in the variable space but on a SOM trained in that space. The variables found contain interesting information and are well suited to the SOM. The trained SOM can comprehensively visualize the problem of interest, which supports discussion and learning from data. The approach is illustrated on the case of the Hitura mine and compared with a standard statistical visualization method, Fisher discriminant analysis.
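A per-variable Fisher score (between-class variance over within-class variance) is a simple stand-in for "which variables separate the labelled clusters"; the paper instead scores separability on a SOM trained in the candidate variable space, so treat this only as the flavour of the comparison baseline, on invented data.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-variable ratio of between-class to within-class variance."""
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

# Variable 0 separates the two labelled groups; variable 1 is pure noise.
rng = np.random.default_rng(0)
X = np.column_stack([np.r_[rng.normal(0, 1, 50), rng.normal(6, 1, 50)],
                     rng.normal(0, 1, 100)])
y = np.r_[np.zeros(50), np.ones(50)]
scores = fisher_scores(X, y)   # scores[0] is large, scores[1] near zero
```

Scoring on a trained SOM rather than in the raw variable space is what lets the method reward variables whose separation is nonlinear.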
Face detection and emotional extraction system using double structure neural networks
Y. Mitsukura, M. Fukumi, N. Akamatsu
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198139
Abstract: We propose a new method to determine whether human faces are present in colour images, using a lip detection neural network (LDNN) and a skin distinction neural network (SDNN). In conventional methods, if a scene contains the same colour as skin, regions that are not skin may be accepted as skin. In the proposed method, the lips are first detected by the LDNN; next, the SDNN distinguishes skin colour from other colours. The method achieves relatively high recognition accuracy because of this double recognition structure. Finally, computer simulations were performed to demonstrate the effectiveness of the proposed scheme: 100 lip-colour, 100 skin-colour and 100 background pictures, each resized to 10x10 pixels, were prepared for training, and validity was verified on test images containing several faces.
Analysis of DNA microarray data using self-organizing map and kernel based clustering
M. Kotani, A. Sugiyama, S. Ozawa
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198159
Abstract: We describe a method combining a self-organizing map (SOM) and kernel-based clustering for analyzing and categorizing gene expression data obtained from DNA microarrays. The SOM is an unsupervised neural-network learning algorithm that maps high-dimensional data to a two-dimensional space; however, it is difficult to find cluster boundaries from SOM results. Kernel-based clustering, on the other hand, can partition the data nonlinearly. To make the SOM results easier to interpret, we apply kernel-based clustering to find the cluster boundaries and show that the proposed method is effective for categorizing gene expression data.
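A minimal online SOM training loop sketches the first stage of such a pipeline; the grid size, learning-rate and neighbourhood schedules below are illustrative choices, and the kernel-based clustering of the resulting codebook is not reproduced here.

```python
import numpy as np

def train_som(X, grid=(5, 5), iters=1000, seed=0):
    """Online SOM: returns codebook W, one weight vector per grid unit."""
    rng = np.random.default_rng(seed)
    gy, gx = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                         indexing="ij")
    coords = np.column_stack([gy.ravel(), gx.ravel()]).astype(float)
    W = rng.normal(size=(len(coords), X.shape[1]))
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
        frac = 1.0 - t / iters
        sigma = 0.5 + 2.0 * frac                     # shrinking radius
        lr = 0.02 + 0.5 * frac                       # decaying rate
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1)
                   / (2.0 * sigma ** 2))
        W += lr * h[:, None] * (x - W)               # pull BMU + neighbours
    return W

# Two synthetic "expression profile" groups; the trained codebook should
# quantize them with small error, but its grid shows no explicit boundary,
# which is the gap the kernel clustering step fills.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (50, 3)),
               rng.normal(4.0, 0.2, (50, 3))])
W = train_som(X)
qerr = np.mean([np.min(((W - x) ** 2).sum(axis=1)) ** 0.5 for x in X])
```

Clustering the codebook vectors W, rather than the raw data, is what makes a nonlinear partition of the map cheap: there are only grid-size many vectors.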