Lighting compensation in outdoors
Pub Date: 2012-11-12 | DOI: 10.1109/STSIVA.2012.6340589
D. Gomez-Melendez, K. Anaya, Sebastian Cortes, Sheila Hernandez, C. Isaza
Automatic outdoor monitoring applications based on computer vision algorithms are strongly affected by lighting variations caused by changing weather conditions. In particular, shadows can produce undesirable effects in image analysis, degrading the results of tracking and object-recognition applications. Although modern cameras include strategies for automatic brightness compensation (white balance), some elements still appear in the scene and hinder the understanding process. Accordingly, this paper presents a method, along with partial experimental results, for normalizing the illumination component of image sequences captured outdoors by a fixed camera. We use the concept of intrinsic image decomposition into two components: illumination and reflectance. The qualitative results demonstrate the potential of the method to attenuate the effects caused by illumination changes in outdoor scenes where the main light source is the sun.
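As a rough illustration of the illumination/reflectance decomposition the abstract describes, the following Python sketch performs a homomorphic-style split, estimating the illumination component as a heavy Gaussian blur of the log-image; the smoothing scale sigma is an assumption for demonstration, not a parameter reported by the authors.

import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_illumination(frame, sigma=30.0, eps=1e-6):
    """Illumination-normalized version of a grayscale frame, rescaled to [0, 1]."""
    log_i = np.log(frame.astype(np.float64) + eps)   # log turns I = L * R into log L + log R
    log_l = gaussian_filter(log_i, sigma=sigma)      # smooth estimate of the illumination term
    log_r = log_i - log_l                            # residual: reflectance component
    r = np.exp(log_r)
    return (r - r.min()) / (r.max() - r.min() + eps) # rescale for display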
{"title":"Lighting compensation in outdoors","authors":"D. Gomez-Melendez, K. Anaya, Sebastian Cortes, Sheila Hernandez, C. Isaza","doi":"10.1109/STSIVA.2012.6340589","DOIUrl":"https://doi.org/10.1109/STSIVA.2012.6340589","url":null,"abstract":"The automatic monitoring applications in outdoors based on computer vision algorithms, are strongly influenced by variations in lighting due to changes in the weather conditions. In particular, shadows can produce undesirable effects on the image analysis causing poor results in tracking or object recognition applications. In addition, it is true that modern cameras have associated strategies for automatic compensation of brightness (white balance), but some elements continue appearing on the scene creating difficulties in the understanding process. Based on the above, we present in this paper a method and some partial experimental results for the normalization of the illumination component in image sequences captured by a fixed camera in outdoors. We used the concept of intrinsic image decomposition into two components: illumination and reflectance. The qualitative results demonstrate the potential of the method to attenuate the effects caused by changes in the illumination of a scene in outdoors, where the main source of light is the sun.","PeriodicalId":383297,"journal":{"name":"2012 XVII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125733926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of classification techniques for the assessment of myocardial viability by cardiac imaging with delayed MR enhancement
Pub Date: 2012-11-12 | DOI: 10.1109/STSIVA.2012.6340572
J. Pineda, X. Suarez, I. Aristizábal, J. E. Duque, A. Zuluaga, N. Aldana
Myocardial viability is a fundamental question in the clinical decision-making process and in the treatment of ischemic heart disease. Contrast-enhanced magnetic resonance can distinguish between viable and necrotic myocardium in a non-invasive manner, with excellent definition of endocardial and epicardial tissue, allowing the extent of necrosis to be assessed. Correct classification of pathological versus healthy tissue is fundamental for subsequent quantification and diagnosis. Image processing theory makes automatic tissue-classification techniques possible; however, it is difficult to choose the best one. In this paper we present a semiautomatic methodology for quantifying myocardial viability in delayed-enhancement MR. We evaluate the accuracy and concordance of different classification algorithms, comparing their results against simulated data and against the classification of expert radiologists. There were no significant differences between the Fuzzy C-means and K-means results. The threshold classification method showed high sensitivity but very low agreement. We conclude that either of the centroid-based algorithms, Fuzzy C-means or K-means, is suitable for the assessment of myocardial viability.
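For context, a minimal sketch of the K-means variant of the centroid-based classification the authors evaluate, clustering myocardial pixel intensities into enhanced (necrotic) and non-enhanced (viable) groups; the two-cluster setup and the brighter-cluster-equals-necrosis rule are illustrative assumptions, not the paper's exact pipeline.

import numpy as np
from sklearn.cluster import KMeans

def classify_viability(myocardium_pixels):
    """myocardium_pixels: 1-D array of intensities inside the myocardial mask."""
    x = myocardium_pixels.reshape(-1, 1).astype(np.float64)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(x)
    necrotic_label = int(np.argmax(km.cluster_centers_.ravel()))  # brighter cluster = enhancement
    return km.labels_ == necrotic_label  # boolean mask marking necrotic pixels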
{"title":"Comparison of classification techniques for the assessment of myocardial viability by cardiac imaging with delayed MR enhancement","authors":"J. Pineda, X. Suarez, I. Aristizábal, J. E. Duque, A. Zuluaga, N. Aldana","doi":"10.1109/STSIVA.2012.6340572","DOIUrl":"https://doi.org/10.1109/STSIVA.2012.6340572","url":null,"abstract":"Myocardial viability is a fundamental question in clinical decision making process and in the treatment of ischemic heart disease. Contrast enhanced Magnetic Resonance can distinguish between viable and necrotic myocardium in non-invasive manner and with excellent definition of endocardial and epicardial tissue, allowing to assess the extent of necrosis. The correct classification between pathological and healthy tissue is a fundamental process for the posterior quantification and diagnosis. Using image processing theory is possible to use automatic techniques for tissue classification; however it is difficult to choose which is better. In this paper we present a semiautomatic methodology that allows the quantification of myocardial viability in MR delayed enhancement. We evaluate the accuracy and concordance of different classification algorithms comparing the results with simulated data and with the classification of expert radiologists. It was not significant differences in the Fuzzy C-means and K-means results. The threshold classification method showed high sensibility but very low agreement. We concluded that either of the centroid-based algorithms, the Fuzzy C-means or the K-means are correct for the assessment of myocardial viability.","PeriodicalId":383297,"journal":{"name":"2012 XVII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128894637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The image quality in the measurement of atmospheric visibility from contrast indices and edges
Pub Date: 2012-11-12 | DOI: 10.1109/STSIVA.2012.6340581
M. Guzman, A. Restrepo
This paper presents an indicator of image quality based on contrast and edges, two techniques used in daytime measurements of atmospheric visibility from fixed cameras. The indicator is applied to detail-rich urban images of a single scene acquired under different sensor exposure conditions and at different times of day, on both sunny and cloudy days. Experimental results show that the index reaches a maximum value that can be used to set the capture conditions in terms of exposure time and diaphragm aperture. The results also show that the maximum values of the index lie in the lower half of the sensor response curve.
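A hedged sketch of a contrast-and-edge index in the spirit of the abstract; the exact combination rule used by the authors is not given here, so this example simply multiplies RMS contrast by Sobel edge density, and both choices are assumptions.

import numpy as np
from scipy.ndimage import sobel

def quality_index(gray):
    """Toy contrast-and-edge quality index for a grayscale image."""
    g = gray.astype(np.float64)
    rms_contrast = g.std() / (g.mean() + 1e-6)               # global RMS contrast
    grad = np.hypot(sobel(g, axis=0), sobel(g, axis=1))      # gradient magnitude
    edge_density = (grad > grad.mean() + grad.std()).mean()  # fraction of strong-edge pixels
    return rms_contrast * edge_density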
{"title":"The image quality in the measurement of atmospheric visibility from contrast indices and edges","authors":"M. Guzman, A. Restrepo","doi":"10.1109/STSIVA.2012.6340581","DOIUrl":"https://doi.org/10.1109/STSIVA.2012.6340581","url":null,"abstract":"This paper presents an indicator of image quality. This index is based on contrast and edges, which are techniques used in measurements of atmospheric visibility during the day from fixed cameras. This indicator is applied to urban images with many details acquired for a single event but different sensor exposure conditions and at different times of the day sunny and cloudy day. Experimental results show that the index reaches a maximum value that is used to set the conditions for capture in terms of the exposure time and diaphragm aperture. The results also show that the maximum values of the index are in the lower half of the sensor response curve.","PeriodicalId":383297,"journal":{"name":"2012 XVII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131745605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Binaural analysis of a sound source
Pub Date: 2012-11-12 | DOI: 10.1109/STSIVA.2012.6340593
J. C. Arroyave, E. E. Espinosa, G. Ricaurte
The localization of sound sources is one of the most important functions of the auditory system. In this paper we propose a model that simulates the human sound-source localization process. Through binaural treatment of audible signals captured with standard microphones and a conventional PC sound card in an ordinary (reverberant) environment, we aim to give a clearer understanding of the issues involved in locating audible sources and of the concepts used in digital processing systems, such as cross-correlation and Fourier analysis, among others. In addition, we develop software tools that support further research and, at the same time, clarify these concepts in an interactive and accessible way. The three auditory localization cues used in this project are sufficient, and even redundant, for determining the orientation of the source in the azimuthal plane. However, the estimation of distance, angular resolution and front-back distinction remains weak compared with real auditory sensation, even when working with signals taken from a dummy head with and without pinnae; perhaps other cues play an essential part in these matters.
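As a small illustration of the cross-correlation cue mentioned in the abstract, the following sketch estimates the interaural time difference (ITD) between the two microphone signals and converts it to an azimuth angle with a simple free-field model; the microphone spacing mic_distance is an assumed value, not one taken from the paper.

import numpy as np

def estimate_azimuth(left, right, fs, mic_distance=0.18, c=343.0):
    """Azimuth in degrees from two 1-D signals sampled at fs Hz."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)          # lag (samples) of peak correlation
    itd = lag / fs                                     # interaural time difference (s)
    sin_az = np.clip(itd * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_az))               # simple free-field ITD model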
{"title":"Binaural analysis of a sound source","authors":"J. C. Arroyave, E. E. Espinosa, G. Ricaurte","doi":"10.1109/STSIVA.2012.6340593","DOIUrl":"https://doi.org/10.1109/STSIVA.2012.6340593","url":null,"abstract":"Category (2). The Localization of sound sources is one of the most important functions in the auditory system. In this paper we propose a model that simulates the localization process of sound sources in the human being; with a binaural treatment of audible signals captured by standard microphones and conventional PC sound card in a (reverberant) regular environment, we attempt to give a more clear understanding about the issues of locate the audible sources, and the concepts used in the digital processing systems, as cross correlation, Fourier analysis, among others. Besides, we will be able to create software tools that allows us to improve research and in the same time, it may clarify concepts in an interactive and accesible way. The three keys of audible location used in this project, are enough and even redundant in the determination about the orientation of the source in the azimuthal plane. Although the determination effects of distances, angular resolution and forward-backward distinction are weak compared with the real audible sensation, despite of the work with signals taken by the dummy head with and without pinna Perhaps there are other keys, ones that can play a essential part in this matters.","PeriodicalId":383297,"journal":{"name":"2012 XVII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132599431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural activity reconstruction with MEG/EEG data considering noise regularization
Pub Date: 2012-11-12 | DOI: 10.1109/STSIVA.2012.6340551
Camilo Ernesto Ardila Franco, José David López Hincapié, J. Espinosa
The reconstruction of neural activity acquired with MEG/EEG (magnetoencephalography/electroencephalography) devices consists of generating three-dimensional images indicating the location of the sources of activity. The neural activity is commonly modeled as current dipoles distributed over the cortical surface, which guarantees a linear propagation model through the head to the sensors placed on the scalp. Several solution approaches are used for estimating neural activity; they differ mainly in the a priori information they include and in their sensitivity to high noise levels. This paper presents a comparison between different static solution approaches commonly used in the literature (minimum norm, LORETA, sLORETA). Their performance was evaluated under different noise conditions, with and without regularization to reduce uncertainty; generalized cross-validation proved to be the best-fitting regularization. We then tested the effect of the number of dipoles used in the forward model: models with 5124, 8196 and 20484 dipoles yielded similar estimation errors, but important differences in computational effort were observed.
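For reference, a minimal sketch of the regularized minimum-norm estimate, one of the static solvers compared in the paper; in practice the regularization parameter would be chosen by generalized cross-validation as the authors report, while here it is a fixed assumed value.

import numpy as np

def minimum_norm(L, y, lambda_=1e-2):
    """Dipole estimate J = L^T (L L^T + lambda I)^-1 y.

    L: (n_sensors x n_dipoles) lead-field matrix; y: measurement vector.
    """
    n_sensors = L.shape[0]
    gram = L @ L.T + lambda_ * np.eye(n_sensors)  # regularized sensor-space Gram matrix
    return L.T @ np.linalg.solve(gram, y)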
{"title":"Neural activity reconstruction with MEG/EEG data considering noise regularization","authors":"Camilo Ernesto Ardila Franco, José David López Hincapié, J. Espinosa","doi":"10.1109/STSIVA.2012.6340551","DOIUrl":"https://doi.org/10.1109/STSIVA.2012.6340551","url":null,"abstract":"CATHEGORY 2: The reconstruction of neural activity acquired with MEG/EEG devices (magnetoencephalogram/electroencephalogram) consists on generating three dimensional images indicating the location of the sources of activity. The neural activity is commonly modeled as current dipoles distributed over the cortical surface, for guaranteeing a linear propagation model though the head until the sensors placed on the scalp. There are several solution approaches used for estimating neural activity, they are mainly differentiated in the a priori information included and their sensibility to high noise levels. A comparison between different static solution approaches commonly used in the literature (minimum norm, LORETA, sLORETA) is presented in this paper. Their performance has been evaluated in different noise conditions with and without regularization for reducing uncertainty, being the general cross validation the best fitted regularization. Then it has been tested the effect of the number of dipoles used in the forward modeling; models with 5124, 8196 and 20484 dipoles were compared giving similar estimation errors but importance differences in computational effort were observed.","PeriodicalId":383297,"journal":{"name":"2012 XVII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115924035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A strategy for classifying imbalanced data sets based on particle swarm optimization
Pub Date: 2012-11-12 | DOI: 10.1109/STSIVA.2012.6340585
C. C. Ceballes-Serrano, S. García-López, J. A. Jaramillo-Garzón, G. Castellanos-Domínguez
Learning from imbalanced data has attracted great interest in the machine learning community, because imbalance is present in many practical applications and affects the reliability of learning algorithms. A dataset is imbalanced when there is a large difference between the number of observations in each class. Classification methods that do not account for this phenomenon tend to produce decision boundaries heavily biased towards the majority class. Ensemble methods such as DataBoost-IM combine sampling strategies with boosting and oversampling; however, when the input data are very noisy these algorithms tend to lose performance. This work presents a new method for dealing with imbalanced data, called SwarmBoost, which combines boosting, oversampling, and subsampling based on an optimization criterion for selecting samples. The results show that SwarmBoost outperforms DataBoost-IM and SMOTE on several databases.
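For context only, a generic SMOTE-style oversampling sketch of the kind these ensemble methods build on; the particle-swarm sample selection that distinguishes SwarmBoost is not reproduced here, and all parameters are illustrative.

import numpy as np

def smote_like(minority, n_new, k=5, seed=0):
    """Interpolate n_new synthetic samples between minority points and their neighbors."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]             # k nearest, skipping the point itself
        j = rng.choice(neighbors)
        out.append(minority[i] + rng.random() * (minority[j] - minority[i]))
    return np.vstack(out)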
{"title":"A strategy for classifying imbalanced data sets based on particle swarm optimization","authors":"C. C. Ceballes-Serrano, S. García-López, J. A. Jaramillo-Garzón, G. Castellanos-Domínguez","doi":"10.1109/STSIVA.2012.6340585","DOIUrl":"https://doi.org/10.1109/STSIVA.2012.6340585","url":null,"abstract":"Learning from imbalanced data has taken great interest on machine learning community because it is often present on many practical applications and reliability of learning algorithms is affected. A dataset is imbalanced if there is a great difference between observations from each class. Classification methods that do not consider this phenomenon are prone to produce decision boundaries totally biased towards the majority class. Today, assembly methods like DataBoost-IM combine sampling strategies with Boosting, and oversampling methods. However, when the input data has much noise these algorithms tend to reduce their performances. This work present a new method to deal with imbalanced data called SwarmBoost that combines Bossting, oversampling, and sub sampling based in optimization criteria to select samples. The results show that SwarmBoost has a better performance than DataBoost-IM and Smote for several databases.","PeriodicalId":383297,"journal":{"name":"2012 XVII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130316362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Radial Hilbert transform in the detect edges of fingerprinting and its application in digital correlation
Pub Date: 2012-11-12 | DOI: 10.1109/STSIVA.2012.6340594
Y. Morales, L. Díaz, F. Vega, C. Torres, L. Mattos
It is well known that the Hilbert transform (HLT) is useful for generating the analytic signal, saving the bandwidth required in communication. It is less widely known that the HLT can also be used for edge detection. In this paper we introduce the radial Hilbert transform (RHLT) and illustrate how to use it for edge detection with the advantage of noise immunity, obtaining in this way the skeleton image of the fingerprint. In the implemented system, the images are fed into a digital correlator that uses the Fourier transform to change the representation space, facilitating the correlation operation and the authentication of the user stored in the database.
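A minimal sketch of edge detection with the radial (spiral-phase) Hilbert transform as described: the image spectrum is multiplied by exp(i*theta), the angular phase in the Fourier plane, and the magnitude of the inverse transform gives an isotropic edge map. This illustrates the transform itself, not the authors' full correlator.

import numpy as np

def radial_hilbert_edges(gray):
    """Isotropic edge map via the radial (spiral-phase) Hilbert transform."""
    rows, cols = gray.shape
    u = np.fft.fftshift(np.fft.fftfreq(cols))          # horizontal frequency axis
    v = np.fft.fftshift(np.fft.fftfreq(rows))          # vertical frequency axis
    U, V = np.meshgrid(u, v)
    spiral = np.exp(1j * np.arctan2(V, U))             # spiral phase mask exp(i*theta)
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    edges = np.fft.ifft2(np.fft.ifftshift(f * spiral))
    return np.abs(edges)                               # strong response at intensity edges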
{"title":"Radial Hilbert transform in the detect edges of fingerprinting and its application in digital correlation","authors":"Y. Morales, L. Díaz, F. Vega, C. Torres, L. Mattos","doi":"10.1109/STSIVA.2012.6340594","DOIUrl":"https://doi.org/10.1109/STSIVA.2012.6340594","url":null,"abstract":"It is well-known that the Hilbert transform (HLT) is useful for generating the analytic signal, and saving the bandwidth required in communication. However, it is known by less people that the HLT is used for edge detection. In this paper, we introduce the radiant Hilbert transform (RHLT), and illustrate how to use it for edge detection with advantage noise immunity, obtaining this form the image squeleton fingerprint. The implemented system the images are entered into a Digital Correlator that uses the Fourier transform to change the space of representation, facilitating, the correlation operation and authenticate the user stored in the data base.","PeriodicalId":383297,"journal":{"name":"2012 XVII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125376779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P300 analysis based on time frequency decomposition methods for ADHD discrimination in child population
Pub Date: 2012-11-12 | DOI: 10.1109/STSIVA.2012.6340561
A. E. Castro-Ospina, L. Duque-Muñoz, G. Castellanos-Domínguez
According to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), attention deficit hyperactivity disorder (ADHD) is characterized by generalized symptoms of inattention, hyperactivity and impulsiveness. ADHD is one of the most common psychological problems in childhood, with an estimated prevalence between 5% and 7%. Different techniques are used to diagnose ADHD, such as neuroimaging, neuropsychological tests and neurophysiological studies. One neurophysiological method records the brain's electrical activity as potentials generated in response to specific stimuli, which can be auditory, somatosensory or visual; these are known as event-related potentials (ERP) or cognitive evoked potentials. We propose to measure the contribution of low-frequency bands, computed with wavelets and empirical mode decomposition, to determine whether significant differences exist in the behavior of ERP waves between ADHD patients and control subjects, for a correct diagnosis. To this end, a database of visual evoked potentials from children between 4 and 15 years old is available, comprising 148 ADHD patients and 123 controls.
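As an illustration of the wavelet side of the proposed analysis, the sketch below decomposes a single ERP trace into frequency bands with PyWavelets and returns per-band energies; the db4 wavelet and five-level depth are assumptions, not values taken from the paper.

import numpy as np
import pywt

def erp_band_energies(erp, wavelet="db4", level=5):
    """Per-level energies of the wavelet decomposition of a 1-D ERP trace."""
    coeffs = pywt.wavedec(np.asarray(erp, dtype=np.float64), wavelet, level=level)
    # coeffs = [approximation, detail_level, ..., detail_1]: low to high frequency
    return [float(np.sum(c ** 2)) for c in coeffs]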
{"title":"P300 analysis based on time frequency decomposition methods for adhd discrimination in child population","authors":"A. E. Castro-Ospina, L. Duque-Muñoz, G. Castellanos-Domínguez","doi":"10.1109/STSIVA.2012.6340561","DOIUrl":"https://doi.org/10.1109/STSIVA.2012.6340561","url":null,"abstract":"According to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), attention deficit hyperactivity disorder (ADHD) is characterized by generalized symptoms and distortion of the lack of attention, hyperactivity and impulsiveness. ADHD is one of the most common psychological problems in childhood, with a prevalence estimated between 5% and 7%. To diagnose the presence of ADHD different techniques are used, such as neuroimaging, neuropsychological tests and neurophysiological studies. One method of the neurophysiological research is the one that records the brain's electrical activity onto potentials generated in response of a specific stimuli, which can be auditory, somatosensory or visual, known as event-related potential (ERP) or so-called cognitive evoked potentials. It is proposed to find the incidence of low-frequency bands calculated from wavelets and empirical mode decomposition to determine whether exist significative differences in the behavior of ERP waves in ADHD patients and control patients for a correct diagnosis. To do so, a database of visual evoked potentials of children between 4 and 15 years old is available, composed of 148 ADHD patients and 123 control patients.","PeriodicalId":383297,"journal":{"name":"2012 XVII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122166707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stereo matching in spatio-temporal accumulation for the estimation of vehicular mean speed
Pub Date: 2012-11-12 | DOI: 10.1109/STSIVA.2012.6340573
Nicolas Laverde, F. Calderon
Measuring vehicle speed on a road is of great importance for traffic planning and regulation. This article presents a video-capture method that greatly reduces the computational complexity of an algorithm for estimating the average speed on a road. The underlying processing technique accumulates a section of each video frame into a matrix in which one dimension corresponds to the accumulated section, usually a line (the space dimension), and the other dimension to each video frame (the time dimension). The accumulation is done over vertical or horizontal lines, and the resulting matrix can be viewed as a new image. If the spatio-temporal accumulation is performed over two lines separated by a known distance, vehicle speed can be estimated by computing the offset between the two resulting images along the time axis. This document reports the results of applying common stereo-matching techniques to the problem of matching the images resulting from the spatio-temporal accumulation, used for estimating the average speed on a road.
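A hedged sketch of the two-line spatio-temporal accumulation; as a simpler stand-in for the paper's stereo-matching step, it cross-correlates the temporal activity profiles of the two space-time images to recover the frame offset. The scan-line rows, line spacing, and frame rate are assumed inputs.

import numpy as np

def mean_speed(frames, row_a, row_b, line_gap_m=10.0, fps=30.0):
    """frames: iterable of 2-D grayscale frames from a fixed roadside camera."""
    strip_a, strip_b = [], []
    for f in frames:
        strip_a.append(f[row_a, :].astype(np.float64))  # space-time image, line A
        strip_b.append(f[row_b, :].astype(np.float64))  # space-time image, line B
    # 1-D activity signals: mean intensity change per frame on each line
    act_a = np.abs(np.diff(np.vstack(strip_a), axis=0)).mean(axis=1)
    act_b = np.abs(np.diff(np.vstack(strip_b), axis=0)).mean(axis=1)
    corr = np.correlate(act_a - act_a.mean(), act_b - act_b.mean(), mode="full")
    dt_frames = abs(np.argmax(corr) - (len(act_b) - 1))  # temporal offset in frames
    return line_gap_m * fps / max(dt_frames, 1)          # mean speed in metres/second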
{"title":"Stereo matching in spatio-temporal accumulation for the estimation of vehicular mean speed","authors":"Nicolas Laverde, F. Calderon","doi":"10.1109/STSIVA.2012.6340573","DOIUrl":"https://doi.org/10.1109/STSIVA.2012.6340573","url":null,"abstract":"Measuring the speed of vehicles in a road is of great importance in the planning and regulation of traffic. This article shows a recent method of capture the video, which greatly reduces the computational complexity of an algorithm for estimating the average speed of a road. The basis of the processing technique used, consists in accumulating sections each video frame in a matrix, in which one dimension corresponds to a section accumulated in a video frame, usually a line “the space dimension” and the other dimension to each video frame “the timedimension”. The accumulation is done on vertical or horizontal lines and the resulting matrix can be seen as a new image. If an accumulation in done on the spatio-temporal video two lines spaced by a known distance, vehicle speed can be estimated calculating the difference of this on the time axis of the two resulting images. This document shows the results of applying common techniques in stereo matching to the problem of matching images resulting from the space-time accumulation, used for estimating the average speed of a road.","PeriodicalId":383297,"journal":{"name":"2012 XVII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115862082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constrained affinity matrix for spectral clustering: A basic semi-supervised extension
Pub Date: 2012-11-12 | DOI: 10.1109/STSIVA.2012.6340590
C. Castro-Hoyos, D. Peluffo, C. Castellanos
Spectral clustering has proven a good alternative in digital signal processing and pattern recognition; however, the choice of affinity function between data points remains an open issue. This work presents an extended version of a traditional multiclass spectral clustering method that incorporates prior information about already-classified data into the affinity matrices, aiming to preserve background relations that might otherwise be lost. That is, using a scaled exponential affinity matrix, constrained by weighting the data according to prior knowledge, together with k-way normalized-cuts clustering, yields a semi-supervised version of traditional spectral clustering. Tests were performed on toy-data classification and image segmentation and evaluated with unsupervised performance measures (group coherence, Fisher criterion and silhouette).
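A minimal sketch, under assumed parameters, of the constrained affinity construction described: a scaled exponential (Gaussian) affinity matrix whose entries are boosted for point pairs sharing a prior label, followed by standard normalized-cut spectral clustering. The boost factor and sigma are illustrative, not the authors' values.

import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def constrained_spectral(X, prior_labels, k, sigma=1.0, boost=2.0):
    """prior_labels: array with -1 for unlabeled points, a class id otherwise."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))                    # scaled exponential affinity
    same = (prior_labels[:, None] == prior_labels[None, :]) & (prior_labels[:, None] >= 0)
    W[same] *= boost                                      # strengthen same-label (must-link) pairs
    d = W.sum(1)
    L_sym = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))  # symmetric normalized Laplacian
    _, vecs = eigh(L_sym, subset_by_index=[0, k - 1])     # k smallest eigenvectors
    rows = vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(rows)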
{"title":"Constrained affinity matrix for spectral clustering: A basic semi-supervised extension","authors":"C. Castro-Hoyos, D. Peluffo, C. Castellanos","doi":"10.1109/STSIVA.2012.6340590","DOIUrl":"https://doi.org/10.1109/STSIVA.2012.6340590","url":null,"abstract":"Spectral clustering has represented a good alternative in digital signal processing and pattern recognition; however a decision concerning the affinity functions among data is still an issue. In this work it is presented an extended version of a traditional multiclass spectral clustering method which employs prior information about the classified data into the affinity matrixes aiming to maintain the background relation that might be lost in the traditional manner, that is using a scaled exponential affinity matrix constrained by weighting the data according to some prior knowledge and via k-way normalized cuts clustering, results in a semi-supervised methodology of traditional spectral clustering. Test was performed over toy data classification and image segmentation and evaluated with and unsupervised performance measures (group coherence, fisher criteria and silhouette).","PeriodicalId":383297,"journal":{"name":"2012 XVII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA)","volume":"371 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113989587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}