Detecting earthquake damage levels using adaptive boosting
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779989
Mona Peyk Herfeh, A. Shahbahrami, Farshad Parhizkar Miandehi
When an earthquake occurs, image-based techniques are influential tools for detecting and classifying damaged buildings. Obtaining precise and exhaustive information about the condition of damaged buildings after an earthquake is the basis of disaster management. Today, satellite imagery such as QuickBird is becoming an increasingly significant data source for disaster management. In this paper, a method for detecting and classifying damaged buildings using satellite imagery and a digital map is proposed. In this method, after building positions are extracted from the digital map, they are located in the pre-event and post-event images of the Bam earthquake. After feature generation, a genetic algorithm is applied to obtain an optimal feature subset. For classification, adaptive boosting is used and compared with neural networks. Experimental results show that the total accuracy of adaptive boosting for detecting and classifying collapsed buildings is about 84 percent.
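As a rough illustration of the classification stage described above, the sketch below trains adaptive boosting and a small neural network on placeholder per-building feature vectors; the features, labels, and parameters are assumptions for demonstration, not the paper's actual data or settings.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: one row per building, columns are hypothetical change
# features (e.g. texture/intensity differences between pre- and post-event patches).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))          # 500 buildings, 12 dummy features
y = rng.integers(0, 2, size=500)        # 0 = intact, 1 = collapsed (dummy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Adaptive boosting, compared against a small neural network, as in the paper.
ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

print("AdaBoost accuracy:", accuracy_score(y_te, ada.predict(X_te)))
print("MLP accuracy:     ", accuracy_score(y_te, mlp.predict(X_te)))
```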
{"title":"Detecting earthquake damage levels using adaptive boosting","authors":"Mona Peyk Herfeh, A. Shahbahrami, Farshad Parhizkar Miandehi","doi":"10.1109/IRANIANMVIP.2013.6779989","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779989","url":null,"abstract":"When an earthquake happens, the image-based techniques are influential tools for detection and classification of damaged buildings. Obtaining precise and exhaustive information about the condition and state of damaged buildings after an earthquake is basis of disaster management. Today's using satellite imageries such Quickbird is becoming more significant data for disaster management. In this paper, a method for detecting and classifying of damaged buildings using satellite imageries and digital map is proposed. In this method after extracting buildings position from digital map, they are located in the pre-event and post-event images of Bam earthquake. After generating features, genetic algorithm applied for obtaining optimal features. For classification, Adaptive boosting is used and compared with neural networks. Experimental results show that total accuracy of adaptive boosting for detecting and classifying of collapsed buildings is about 84 percent.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131158679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new approach to apply texture features in minerals identification in petrographic thin sections using ANNs
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779990
H. Izadi, J. Sadri, Nosrat-Agha Mehran
Identification of minerals in petrographic thin sections using intelligent methods is a very complex and challenging task faced by mineralogists and computer scientists. Textural features play a very important role in mineral identification, and without them, recognizing minerals in thin sections leads to many misclassification results. The thin sections have been studied under plane-polarized and cross-polarized light. In this paper, in order to extract the textural features of minerals in thin sections, the co-occurrence matrix is used, and six features, including entropy, homogeneity, energy, correlation, and maximum probability, are extracted from each image. Then, ANNs are used for identification in complex situations, and experimental results show that using textural features in mineral identification significantly improves classification results on petrographic thin sections.
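The sketch below illustrates grey-level co-occurrence features of the kind the abstract lists, assuming a generic grayscale image stands in for a thin-section micrograph; the offsets and angles are arbitrary choices, and entropy and maximum probability are derived by hand because scikit-image's graycoprops does not cover them in older releases.

```python
import numpy as np
from skimage import data
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

# Any 8-bit grayscale image stands in for a thin-section micrograph here.
image = data.camera()

# Co-occurrence matrix for one pixel offset and four orientations.
glcm = graycomatrix(image, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)

features = {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("homogeneity", "energy", "correlation")}

# Entropy and maximum probability are derived directly from the normalized GLCM.
p = glcm  # shape (levels, levels, n_distances, n_angles); each slice sums to 1
log_p = np.log2(p, where=p > 0, out=np.zeros_like(p))
features["entropy"] = float((-(p * log_p)).sum(axis=(0, 1)).mean())
features["max_probability"] = float(p.max())

print(features)
```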
{"title":"A new approach to apply texture features in minerals identification in petrographic thin sections using ANNs","authors":"H. Izadi, J. Sadri, Nosrat-Agha Mehran","doi":"10.1109/IRANIANMVIP.2013.6779990","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779990","url":null,"abstract":"Identification of minerals in petrographic thin sections using intelligent methods is very complex and challenging task which, mineralogists and computer scientists are faced with it. Textural features have very important role to identify minerals, and undoubtedly without using these features, recognition minerals in thin sections yield to many miss classification results. Thin sections have been studied applying plane-polarized and cross-polarized lights. In this paper, in order to extract textural features of minerals in thin section, co-occurrence matrix is used, and six features as Entropy, Homogeneity, Energy, Correlation and Maximum Probability are extracted from each image. Then, ANNs are used for identifying in complex situation and experimental results have shown that using textural features in mineral identification, significant improve classification result in petrographic thin sections.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121485961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Offline handwritten Farsi cursive text recognition using hidden Markov models
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779953
Z. Imani, A. Ahmadyfard, A. Zohrevand, Mohamad Alipour
In this paper we address the problem of recognizing Farsi handwritten words. We extract two types of features from vertical stripes of word images: the chain code of the word boundary and the distribution of foreground density across the word image. The extracted feature vectors are coded using self-organizing vector quantization. The resulting codes are used to train a model for each word in the database. Each word is modeled with a discrete hidden Markov model (HMM). In order to evaluate the performance of the proposed system, we conducted an experiment on the newly prepared FARSA database, testing the proposed method on 198 word classes. Compared with existing methods, the experimental results are very promising.
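To make the discrete-HMM scoring step concrete, the sketch below implements a scaled forward algorithm that computes the log-likelihood of a quantized observation sequence under one toy word model; the transition and emission values are illustrative, not a trained FARSA model.

```python
import numpy as np

def log_forward(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM.

    obs   : sequence of integer symbol indices (vector-quantized stripe codes)
    start : (n_states,) initial state probabilities
    trans : (n_states, n_states) transition matrix
    emit  : (n_states, n_symbols) emission matrix
    """
    alpha = start * emit[:, obs[0]]
    log_lik = 0.0
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()              # scale to avoid underflow
        log_lik += np.log(s)
        alpha /= s
    return log_lik + np.log(alpha.sum())

# Toy 3-state left-to-right word model over a codebook of 8 symbols.
start = np.array([1.0, 0.0, 0.0])
trans = np.array([[0.6, 0.4, 0.0],
                  [0.0, 0.7, 0.3],
                  [0.0, 0.0, 1.0]])
emit = np.full((3, 8), 1 / 8)

codes = [0, 3, 3, 5, 7, 2]           # quantized stripe codes for one word image
print("log P(obs | word model) =", log_forward(codes, start, trans, emit))
```

In recognition, each word class would have its own trained model and the class with the highest log-likelihood would be chosen.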
{"title":"Offline handwritten Farsi cursive text recognition using hidden Markov models","authors":"Z. Imani, A. Ahmadyfard, A. Zohrevand, Mohamad Alipour","doi":"10.1109/IRANIANMVIP.2013.6779953","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779953","url":null,"abstract":"In this paper we address the problem of recognizing Farsi handwritten words. We extract two types of features from vertical stripes on word images: chain-code of word boundary and distribution of foreground density across the image word. The extracted feature vectors are coded using self organizing vector quantization. The result codes are used for training the model of each word in the database. Each word is modeled using discrete hidden Markov models (HMM). In order to evaluate the performance of the proposed system we conducted an experiment using new prepared database FARSA. We tested the proposed method using 198 word classes in this database. The result of experiment in compare with the existing methods is very promising.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123550749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image encryption using genetic algorithm
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780026
Roza Afarin, S. Mozaffari
This paper presents a new method for image encryption using a genetic algorithm (GA). First, the rows and columns of the input image are shuffled randomly. Then, the obtained image is divided into four equal-sized sub-images. After selecting one of these sub-images at random, two pixels are chosen from it as the GA initial population. Crossover and mutation operations are applied to the binary values of the selected pixels, and the image is then reconstructed in the reverse manner. If the entropy of the resulting image increases, the current sub-image is used in the next step; otherwise, another sub-image is chosen randomly and the same process is applied. The randomness of the encrypted image is measured by entropy, correlation coefficients, and histogram analysis. Experimental results show that the proposed method can be used effectively for image encryption.
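The sketch below illustrates the basic building blocks the abstract mentions: random row/column permutation, single-point crossover and one-bit mutation on two selected pixels, and histogram entropy as the randomness measure. The crossover point, mutation bit, and toy image are arbitrary assumptions; note that permutation alone preserves the histogram, so the entropy gain has to come from the pixel-value operations.

```python
import numpy as np

rng = np.random.default_rng(42)

def shannon_entropy(img):
    """Entropy of an 8-bit image from its grey-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def shuffle_rows_cols(img):
    """Randomly permute rows and columns (the shuffling step)."""
    r = rng.permutation(img.shape[0])
    c = rng.permutation(img.shape[1])
    return img[r][:, c], (r, c)

def crossover_mutate(a, b, point=4, mut_bit=0):
    """Single-point crossover of two 8-bit pixel values plus a one-bit mutation."""
    mask = (1 << point) - 1                      # low `point` bits swap between pixels
    child1 = (int(a) & ~mask) | (int(b) & mask)
    child2 = (int(b) & ~mask) | (int(a) & mask)
    child1 ^= 1 << mut_bit                       # flip one bit as the mutation
    return np.uint8(child1), np.uint8(child2)

# Toy 8-bit image standing in for the plaintext image.
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
shuffled, perms = shuffle_rows_cols(img)
p1, p2 = crossover_mutate(shuffled[0, 0], shuffled[0, 1])
print("pixel pair after crossover/mutation:", int(p1), int(p2))
print("entropy of shuffled image:", shannon_entropy(shuffled))
```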
{"title":"Image encryption using genetic algorithm","authors":"Roza Afarin, S. Mozaffari","doi":"10.1109/IRANIANMVIP.2013.6780026","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6780026","url":null,"abstract":"This paper presents a new method for image encryption using Genetic algorithm (GA). First, rows and columns of the input image are dislocated randomly. Then, the obtained image is divided into four equal sized sub-images. After selecting one of these sub-images accidentally, two pixels are chosen from it as GA initial population. Cross-over and mutation operations are applied on the binary values of the selected pixels. Then the image is reconstructed in the reverse manner. If entropy of the result image increases, the current sub-image is utilized for the next step. Otherwise, another sub-images is chosen randomly and the same process is applied. Randomness of the encrypted image is measured by entropy, correlation coefficients and histogram analysis. Experimental results show that the proposed method can be used effectively for image encryption.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126070285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An automated vessel segmentation algorithm in retinal images using 2D Gabor wavelet
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779967
Pouya Nazari, H. Pourghassem
This paper proposes a novel method to extract blood vessels in retinal images. We also present a new, effective preprocessing step that reduces the effect of non-uniform illumination using the red and green channels of these images. The vessels are finally extracted using a 2D Gabor filter bank followed by grayscale thresholding and thresholding based on the structural properties of labeled vessel candidates, in order to extract both large and thin vessels. The proposed algorithm is evaluated on the publicly available DRIVE database. The results show that the presented algorithm achieves an accuracy of 94.81%, with a True Positive Fraction (TPF) of 71.12% and a False Positive Fraction (FPF) of 2.84%.
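As a rough sketch of the Gabor-filter-bank stage, the code below filters the green channel of a placeholder RGB image at several orientations, keeps the maximum magnitude response per pixel, and applies a single Otsu threshold in place of the paper's two-stage thresholding; the frequency and orientation count are assumptions.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.filters import gabor, threshold_otsu

# Any RGB image stands in for a retinal fundus image here; the green channel
# is commonly used because it tends to show the best vessel contrast.
rgb = img_as_float(data.astronaut())
green = rgb[..., 1]

# Bank of 2D Gabor filters over several orientations; keep the maximum
# magnitude response per pixel as a simple "vesselness" map.
responses = []
for theta in np.linspace(0, np.pi, 8, endpoint=False):
    real, imag = gabor(green, frequency=0.2, theta=theta)
    responses.append(np.hypot(real, imag))
vesselness = np.max(responses, axis=0)

# A single global Otsu threshold stands in for the paper's two thresholding steps.
mask = vesselness > threshold_otsu(vesselness)
print("candidate vessel pixels:", int(mask.sum()))
```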
{"title":"An automated vessel segmentation algorithm in retinal images using 2D Gabor wavelet","authors":"Pouya Nazari, H. Pourghassem","doi":"10.1109/IRANIANMVIP.2013.6779967","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779967","url":null,"abstract":"This paper proposes a novel method to extract blood vessels in retinal images. We also present a new effective preprocessing to reduce the effect of non-uniformly illumination using red and green channels of these images. The vessels finally have been extracted using 2D Gabor filter bank followed by thresholding on grayscale and thresholding based on structural properties of labeled vessel candidates, to extract large and thin vessels. The proposed algorithm is evaluated on DRIVE database, which is publically available. The results show that presented algorithm achieved accuracy rate of 94.81% along with True Positive Fraction (TPF) of 71.12% and False Positive Fraction (FPF) of 2.84%.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129243509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerating GPU implementation of contourlet transform
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780005
Majid Mohrekesh, Shekoofeh Azizi, S. Samavi
The widespread use of the contourlet transform (CT) and today's real-time requirements demand faster execution of the CT. Solutions are available, but due to their lack of portability or their computational intensity, they are disadvantageous in real-time applications. In this paper we take advantage of modern GPUs for acceleration. The GPU is well suited to data-parallel computations such as the CT. The convolution part of the CT, which is the most computationally intensive step, is reshaped for parallel processing. The whole transform is then moved onto the GPU to avoid multiple time-consuming migrations between the host and the device. Experimental results show that, on existing GPUs, CT execution achieves more than a 19x speedup compared to its non-parallel CPU-based counterpart. It takes approximately 40 ms to compute the transform of a 512×512 image, which should be sufficient for real-time applications.
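The sketch below illustrates the underlying idea of moving the convolution-heavy work onto the GPU once and keeping it there, using PyTorch's conv2d with a random filter bank as a stand-in for the contourlet filters; it is not the paper's implementation, and the measured times depend entirely on the hardware.

```python
import time
import torch
import torch.nn.functional as F

def bank_convolve(image, filters):
    """Apply a small 2D filter bank to one grayscale image."""
    x = image.unsqueeze(0).unsqueeze(0)             # (1, 1, H, W)
    w = filters.unsqueeze(1)                        # (n_filters, 1, kH, kW)
    return F.conv2d(x, w, padding=filters.shape[-1] // 2)

image = torch.randn(512, 512)                       # stand-in 512x512 input
filters = torch.randn(8, 9, 9)                      # stand-in directional filters

# CPU reference.
t0 = time.perf_counter()
out_cpu = bank_convolve(image, filters)
cpu_ms = (time.perf_counter() - t0) * 1e3

if torch.cuda.is_available():
    img_gpu, flt_gpu = image.cuda(), filters.cuda() # move data once, keep it on device
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    out_gpu = bank_convolve(img_gpu, flt_gpu)
    torch.cuda.synchronize()
    gpu_ms = (time.perf_counter() - t0) * 1e3
    print(f"CPU: {cpu_ms:.1f} ms, GPU: {gpu_ms:.1f} ms")
else:
    print(f"CPU: {cpu_ms:.1f} ms (no CUDA device found)")
```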
{"title":"Accelerating GPU implementation of contourlet transform","authors":"Majid Mohrekesh, Shekoofeh Azizi, S. Samavi","doi":"10.1109/IRANIANMVIP.2013.6780005","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6780005","url":null,"abstract":"The widespread usage of the contourlet-transform (CT) and today's real-time needs demand faster execution of CT. Solutions are available, but due to lack of portability or computational intensity, they are disadvantageous in real-time applications. In this paper we take advantage of modern GPUs for the acceleration purpose. GPU is well-suited to address data-parallel computation applications such as CT. The convolution part of CT, which is the most computational intensive step, is reshaped for parallel processing. Then the whole transform is transported into GPU to avoid multiple time consuming migrations between the host and device. Experimental results show that with existing GPUs, CT execution achieves more than 19x speedup as compared to its non-parallel CPU-based method. It takes approximately 40ms to compute the transform of a 512×512 image, which should be sufficient for real-time applications.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"18 789 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129351078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectral library pruning based on classification techniques
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779966
H. Fayyazi, H. Dehghani, M. Hosseini
Spectral unmixing is an active research area in remote sensing. The direct use of spectral libraries in spectral unmixing has increased with the growing availability of such libraries. In this setting, the spectral unmixing problem is converted into a sparse regression problem, which is time-consuming because of the irrelevant spectra in the library; these spectra should therefore be removed in some way. In this paper, a machine learning approach for spectral library pruning is introduced. First, the spectral library is clustered in a simple and efficient new feature space. Then the training data needed to learn a classifier are generated by adding different noise levels to the clustered spectra, with labels determined by the results of the spectral library clustering. After the classifier is trained, each pixel of the image is classified with it. To prune the library, the spectra whose labels are not assigned to any image pixel are removed. We use three classifiers, a decision tree, a neural network, and k-nearest neighbors, to determine the effect of applying different classifiers. The reported comparisons show that the proposed method works well on noisy images.
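The sketch below walks through the pruning pipeline with placeholder spectra: cluster the library, generate noisy training samples labeled by cluster, train a classifier (a decision tree here), classify the image pixels, and drop every library spectrum whose cluster label never appears among the pixels. The data, cluster count, and noise levels are assumptions, and raw spectra stand in for the paper's feature space.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Toy spectral library: 60 spectra with 50 bands (placeholders for a real library).
library = rng.random((60, 50))

# 1) Cluster the library.
n_clusters = 6
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(library)

# 2) Build training data by adding several noise levels to the clustered spectra.
noise_levels = (0.01, 0.05, 0.1)
X_train = np.vstack([library + rng.normal(0, s, library.shape) for s in noise_levels])
y_train = np.tile(labels, len(noise_levels))

# 3) Train a classifier and label every image pixel (random pixels as stand-ins).
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pixels = rng.random((1000, 50))
pixel_labels = clf.predict(pixels)

# 4) Prune: keep only spectra whose cluster label occurs among the image pixels.
used = np.unique(pixel_labels)
pruned_library = library[np.isin(labels, used)]
print("library size:", len(library), "-> pruned:", len(pruned_library))
```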
{"title":"Spectral library pruning based on classification techniques","authors":"H. Fayyazi, H. Dehghani, M. Hosseini","doi":"10.1109/IRANIANMVIP.2013.6779966","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779966","url":null,"abstract":"Spectral unmixing is an active research area in remote sensing. The direct use of the spectral libraries in spectral unmixing is increased by increasing the availability of the libraries. In this way, the spectral unmixing problem is converted into a sparse regression problem that is time-consuming. This is due to the existence of irrelevant spectra in the library. So these spectra should be removed in some way. In this paper, a machine learning approach for spectral library pruning is introduced. At first, the spectral library is clustered based on a simple and efficient new feature space. Then the training data needed to learn a classifier are extracted by adding different noise levels to the clustered spectra. The label of the training data is determined based on the results of spectral library clustering. After learning the classifier, each pixel of the image is classified using it. For pruning the library, the spectra with the labels that none of the image pixels belong to, are removed. We use three classifiers, decision tree, neural networks and k-nearest neighbor to determine the effect of applying different classifiers. The results compared here show that the proposed method works well in noisy images.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131384404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time dynamic hand gesture recognition using hidden Markov models
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779977
M. M. Gharasuie, Hadi Seyedarabi
The goal of human-computer interaction is to make it resemble human-human interaction. Gestures play an important role in daily life for conveying information and emotions. Gestures result from the movement of parts of the body, of which hand movement is the most widely used; this is known as dynamic hand gesture. It is therefore very important to track and recognize hand motion for multi-purpose use. In this paper, we propose a real-time system, based on Hidden Markov Models (HMMs), that recognizes hand gestures for the English digits 0 to 9 from continuous hand motion. There are two kinds of gestures: key gestures and link gestures. The link gestures are used to separate the key gestures from the other hand motion trajectories (the gesture path), a process called spotting. This spotting is a heuristic-based method that identifies the start and end points of the key gestures. The gesture path between these two points is then given to the HMMs for classification. Experimental results show that the proposed system can successfully recognize the key gestures with a recognition rate of 93.84% and works well in complex situations.
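As a small illustration of how a continuous gesture path can feed a discrete HMM, the sketch below quantizes the direction of motion between consecutive trajectory points into a fixed codebook of symbols; the 16-direction codebook and the synthetic circular path are assumptions, not the paper's exact scheme.

```python
import numpy as np

def path_to_codes(points, n_directions=16):
    """Quantize a 2D gesture path into direction symbols for a discrete HMM.

    points: array of shape (T, 2) with (x, y) hand positions over time.
    Returns an integer code in [0, n_directions) for each path segment.
    """
    diffs = np.diff(np.asarray(points, dtype=float), axis=0)
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])          # (-pi, pi]
    bins = np.round(angles / (2 * np.pi / n_directions)).astype(int)
    return np.mod(bins, n_directions)

# A rough circular path, similar to drawing the digit "0" in the air.
t = np.linspace(0, 2 * np.pi, 20)
path = np.stack([np.cos(t), np.sin(t)], axis=1)
print(path_to_codes(path))
```

The resulting symbol sequences (between the spotted start and end points) would then be scored against one trained HMM per digit.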
{"title":"Real-time dynamic hand gesture recognition using hidden Markov models","authors":"M. M. Gharasuie, Hadi Seyedarabi","doi":"10.1109/IRANIANMVIP.2013.6779977","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779977","url":null,"abstract":"The goal of interaction between human and computer is to find a way to treat it like human-human interaction. Gestures play an important role in human's daily life in order to transfer data and human emotions. The gestures are results of part of body movement in which hand movement is the most widely used one that is known as dynamic hand gesture. So it is very important to follow and recognize hand motion to provide multi-purpose use. In this paper, we propose a system that recognizes hand gestures from continuous hand motion for English numbers from 0 to 9 in real-time, based on Hidden Markov Models (HMMs). There are two kinds of gestures, key gestures and link gestures. The link gestures are used to separate the key gestures from other hand motion trajectories (gesture path) that are called spotting. This type of spotting is a heuristic-based method that identifies start and end points of the key gestures. Then gesture path between these two points are given to HMMs for classification. Experimental results show that the proposed system can successfully recognize the key gestures with recognition rate of 93.84%and work in complex situations very well.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116350177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the performance of skin segmentation in quasi-skin regions via multiple classifier system
Pub Date: 2013-09-01 | DOI: 10.1109/IranianMVIP.2013.6780006
Mohamad Fatahi, Mohsen Nadjafi, S.V. Al-Din Makki
This paper presents a skin segmentation method based on a multiple classifier system strategy in order to improve classification performance, especially in quasi-skin regions. Quasi-skin regions in digital images are non-skin patches with characteristics similar to human skin, and they are a major source of misclassification error in skin segmentation. To cope with this problem, we have designed an algorithmic architecture that combines four prominent classifiers into a synergy that conceals their weaknesses and amplifies their strengths. The participating classifiers in our approach are a cellular learning automaton, a likelihood classifier, a Gaussian classifier, and Support Vector Machines, with decision-making performed via a conditional voting step. Accuracy and specificity were used to evaluate the performance. Experiments on a collected test database of 142 challenging images demonstrate that the proposed skin detector improves accuracy and specificity by up to 1.92% and 0.83%, respectively, over the best individual classifier.
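The sketch below shows a simplified multiple-classifier combination for per-pixel skin labels using plain majority voting; since a cellular learning automaton and the paper's likelihood model are not off-the-shelf components, Gaussian naive Bayes, an SVM, and k-NN stand in here, the conditional voting rule is reduced to hard voting, and the features and labels are random placeholders.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
X = rng.random((2000, 3))                                # per-pixel colour features (placeholder)
y = (X[:, 0] + 0.2 * rng.standard_normal(2000) > 0.5).astype(int)   # 1 = skin (dummy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Majority vote over three per-pixel classifiers.
ensemble = VotingClassifier(
    estimators=[("gauss", GaussianNB()),
                ("svm", SVC()),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="hard",
).fit(X_tr, y_tr)

pred = ensemble.predict(X_te)
specificity = recall_score(y_te, pred, pos_label=0)      # true-negative rate
print("accuracy:", accuracy_score(y_te, pred), "specificity:", specificity)
```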
{"title":"Improving the performance of skin segmentation in quasi-skin regions via multiple classifier system","authors":"Mohamad Fatahi, Mohsen Nadjafi, S.V. Al-Din Makki","doi":"10.1109/IranianMVIP.2013.6780006","DOIUrl":"https://doi.org/10.1109/IranianMVIP.2013.6780006","url":null,"abstract":"This paper presents a skin segmentation method based on multiple classifier system strategy in order to improve the performance of classification especially in quasi-skin regions. Quasi-skin regions in digital images are non-skin patches which have characteristics like the human skin and are known as a basic origin of misclassification error in skin segmentation. To cope with this problem, we have designed an algorithmic architecture by combining four prominent classifiers to construct a synergy to conceal their weaknesses and amplify their strengths. Participant classifiers in our approach include cellular learning automaton, likelihood, Gaussian and Support Vector Machines in which decision making performs via a conditional voting step. The accuracy and specificity were employed to evaluate the performance. Experiments on a collected test-set database including 142 challenging images demonstrate that the proposed skin detector is able to improve the accuracy and specificity up to 1.92% and 0.83%, respectively, than the best of individual classifier.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125437033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic brain hemorrhage segmentation and classification in CT scan images
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780031
Bahareh Shahangian, H. Pourghassem
Brain hemorrhage detection and classification is a major help to physicians in rescuing patients at an early stage. In this paper, we introduce an automatic detection and classification method to improve and accelerate physicians' decision-making. To achieve this, we first use a simple and effective segmentation method to detect and separate the hemorrhage regions from the other parts of the brain, and then extract a number of features from each detected hemorrhage region. We select a subset of suitable features using a Genetic Algorithm (GA)-based feature selection algorithm. Finally, we classify the different types of hemorrhages. Our algorithm is evaluated on a set of CT scan images, and the segmentation accuracy for three major types of hemorrhage (EDH, ICH, and SDH) is 96.22%, 95.14%, and 90.04%, respectively. In the classification step, the multilayer neural network is more successful than the KNN classifier because of its higher accuracy (93.3%). Overall, we achieve an accuracy of more than 90% for the detection and classification of brain hemorrhages.
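The sketch below illustrates the region-feature-plus-classifier stage, assuming hemorrhage regions are already available as binary masks; generic shape descriptors from skimage.regionprops stand in for the paper's feature set, the GA selection step is omitted, and the class labels are dummies.

```python
import numpy as np
from skimage.draw import ellipse
from skimage.measure import label, regionprops
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def region_features(mask):
    """Simple shape features for each connected region of a binary mask."""
    return np.array([[r.area, r.eccentricity, r.solidity, r.extent]
                     for r in regionprops(label(mask))])

# Toy elliptical blobs standing in for segmented hemorrhage regions.
masks = []
for _ in range(30):
    m = np.zeros((64, 64), dtype=bool)
    rr, cc = ellipse(32, 32, 5 + rng.integers(0, 15), 5 + rng.integers(0, 15),
                     shape=m.shape)
    m[rr, cc] = True
    masks.append(m)

X = np.vstack([region_features(m) for m in masks])
y = rng.integers(0, 3, size=len(X))        # 0 = EDH, 1 = ICH, 2 = SDH (dummy labels)

# The paper compares a multilayer neural network against KNN for this step.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("MLP training accuracy:", mlp.score(X, y))
print("KNN training accuracy:", knn.score(X, y))
```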
{"title":"Automatic brain hemorrhage segmentation and classification in CT scan images","authors":"Bahareh Shahangian, H. Pourghassem","doi":"10.1109/IRANIANMVIP.2013.6780031","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6780031","url":null,"abstract":"Brain hemorrhage detection and classification is a major help to physicians to rescue patients in an early stage. In this paper, we have tried to introduce an automatic detection and classification method to improve and accelerate the process of physicians' decision-making. To achieve this purpose, at first we have used a simple and effective segmentation method to detect and separate the hemorrhage regions from other parts of the brain, and then we have extracted a number of features from each detected hemorrhage region. We selected some of convenient features by using a Genetic Algorithm (GA)-based feature selection algorithm. Eventually, we have classified the different types of hemorrhages. Our algorithm is evaluated on a perfect set of CT-scan images and the segmentation accuracy for three major types of hemorrhages (EDH, ICH and SDH) obtained 96.22%, 95.14% and 90.04%, respectively. In the classification step, multilayer neural network could be more successful than the KNN classifier because of its higher accuracy (93.3%). Finally, we achieved the accuracy rate of more than 90% for the detection and classification of brain hemorrhages.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116894446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}