Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086186
M. Bhagya, S. Tripathi, P. S. Thilagam
This paper presents a technique to optimize contour-based template matching using General Purpose computation on Graphics Processing Units (GPGPU). Contour-based template matching requires edge detection and a search for the presence of a template across an entire image, whose real-time implementation is not trivial. Using the proposed solution, we achieved an implementation fast enough to process a standard video (640 × 480) in real time with sufficient accuracy.
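The pipeline above (edge detection, then an exhaustive search for the template) can be sketched sequentially as follows. This is a minimal, hypothetical CPU version with a crude one-directional edge detector, not the authors' GPGPU or hexagonal-framework implementation:

```python
def edge_map(img, thresh=1):
    """Mark pixels where the horizontal intensity jump to the right neighbor
    exceeds thresh (a deliberately crude edge detector)."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w - 1):
            if abs(img[y][x + 1] - img[y][x]) >= thresh:
                edges[y][x] = 1
    return edges

def match_template(image_edges, template_edges):
    """Slide the template edge map over the image edge map and return the
    (x, y) offset with the largest count of overlapping edge pixels."""
    th, tw = len(template_edges), len(template_edges[0])
    best_score, best_pos = -1, None
    for y in range(len(image_edges) - th + 1):
        for x in range(len(image_edges[0]) - tw + 1):
            score = sum(image_edges[y + dy][x + dx] & template_edges[dy][dx]
                        for dy in range(th) for dx in range(tw))
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos
```

The inner score loop is embarrassingly parallel, which is what makes the search a natural fit for a GPGPU kernel.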
{"title":"Optimization of countour based template matching using GPGPU based hexagonal framework","authors":"M. Bhagya, S. Tripathi, P. S. Thilagam","doi":"10.1109/HIS.2014.7086186","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086186","url":null,"abstract":"This paper presents a technique to optimize contour based template matching by using General Purpose computation on Graphics Processing Units (GPGPU). Contour based template matching requires edge detection and searching for presence of a template in an entire image, real time implementation of which is not trivial. Using the proposed solution, we could achieve an implementation fast enough to process a standard video (640 × 480) in real time with sufficient accuracy.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129635418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086191
Hossam M. Zawbaa, M. Hazman, M. Abbass, A. Hassanien
The aim of this paper is to develop an effective classification approach based on the Random Forest (RF) algorithm. Three fruits, i.e., apples, strawberries, and oranges, were analysed, and several features were extracted based on the fruits' shape and colour characteristics as well as the Scale Invariant Feature Transform (SIFT). A preprocessing stage using image processing to prepare the fruit image dataset and reduce its color index is presented. The fruit image features are then extracted. Finally, the fruit classification process is performed using Random Forest (RF), a recently developed machine learning algorithm. A regular digital camera was used to acquire the images, and all manipulations were performed in a MATLAB environment. The approach was tested and evaluated using a series of experiments with 178 fruit images. The results show that the Random Forest (RF) based algorithm provides better accuracy than other well-known machine learning techniques such as the K-Nearest Neighbor (K-NN) and Support Vector Machine (SVM) algorithms. Moreover, the system is capable of automatically recognizing the fruit name with a high degree of accuracy.
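The idea of a Random Forest classifier can be illustrated with a deliberately simplified ensemble: each tree is a one-level decision stump trained on a randomly chosen feature, and prediction is a majority vote. The two-feature color data (mean red, mean yellow), the class names, and the omission of bootstrap sampling are all assumptions for the sketch, not the paper's SIFT/shape/colour pipeline:

```python
import random

def train_forest(data, labels, classes, n_trees=15, seed=0):
    """Ensemble of decision stumps; each stump picks a random feature and the
    threshold/label orientation with the fewest training errors (binary toy case)."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        feat = rng.randrange(len(data[0]))
        best = None
        for t in sorted({row[feat] for row in data}):
            for lo, hi in ((classes[0], classes[1]), (classes[1], classes[0])):
                errs = sum(lab != (lo if row[feat] <= t else hi)
                           for row, lab in zip(data, labels))
                if best is None or errs < best[0]:
                    best = (errs, feat, t, lo, hi)
        stumps.append(best[1:])
    return stumps

def predict(stumps, x):
    """Majority vote over all stumps."""
    votes = {}
    for feat, t, lo, hi in stumps:
        c = lo if x[feat] <= t else hi
        votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)
```

A full RF would also bootstrap the training set per tree and grow deeper trees; the random-feature choice alone already gives the flavor of decorrelated voters.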
{"title":"Automatic fruit classification using random forest algorithm","authors":"Hossam M. Zawbaa, M. Hazman, M. Abbass, A. Hassanien","doi":"10.1109/HIS.2014.7086191","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086191","url":null,"abstract":"The aim of this paper is to develop an effective classification approach based on Random Forest (RF) algorithm. Three fruits; i.e., apples, Strawberry, and oranges were analysed and several features were extracted based on the fruits' shape, colour characteristics as well as Scale Invariant Feature Transform (SIFT). A preprocessing stages using image processing to prepare the fruit images dataset to reduce their color index is presented. The fruit image features is then extracted. Finally, the fruit classification process is adopted using random forests (RF), which is a recently developed machine learning algorithm. A regular digital camera was used to acquire the images, and all manipulations were performed in a MATLAB environment. Experiments were tested and evaluated using a series of experiments with 178 fruit images. It shows that Random Forest (RF) based algorithm provides better accuracy compared to the other well know machine learning techniques such as K-Nearest Neighborhood (K-NN) and Support Vector Machine (SVM) algorithms. 
Moreover, the system is capable of automatically recognize the fruit name with a high degree of accuracy.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123940185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086163
M. Fouad, Mahmood A. Mahmood, Hamdi A. Mahmoud, Adham Mohamed, A. Hassanien
Road surface condition information is very useful for the safety of road users and for informing road administrators so they can conduct appropriate maintenance. Roughness features of the road surface, such as speed bumps and potholes, have adverse effects on road users and their vehicles. Speed bumps are usually used to slow motor-vehicle traffic in specific areas in order to increase safety. On the other hand, driving over speed bumps at high speed can cause accidents or spinal injury. Therefore, informing road users of the position of speed bumps along their journey, especially at night or when lighting is poor, would be a valuable feature. This paper exploits a mobile sensor computing framework to monitor and assess road surface conditions. The framework measures changes in gravity orientation through a gyroscope and shifts in the accelerometer's readings, both as evidence for the existence of speed bumps. The proposed classification approach uses the theory of rough mereology to rank the processed data in order to make useful recommendations to road users.
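The accelerometer side of such a framework can be approximated by flagging samples whose vertical acceleration deviates strongly from the rest of the trace. This is a hypothetical outlier-threshold stand-in, not the paper's gyroscope-plus-accelerometer assessment or its rough-mereology ranking:

```python
def detect_bumps(z_accel, k=3.0):
    """Return indices of samples whose deviation from the trace mean
    exceeds k standard deviations (candidate speed-bump impacts)."""
    n = len(z_accel)
    mean = sum(z_accel) / n
    std = (sum((z - mean) ** 2 for z in z_accel) / n) ** 0.5
    return [i for i, z in enumerate(z_accel) if abs(z - mean) > k * std]
```

A deployed detector would use a sliding window (road grade shifts the mean) and fuse the gyroscope's orientation changes, but the spike-versus-baseline decision is the core of the signal.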
{"title":"Intelligent road surface quality evaluation using rough mereology","authors":"M. Fouad, Mahmood A. Mahmood, Hamdi A. Mahmoud, Adham Mohamed, A. Hassanien","doi":"10.1109/HIS.2014.7086163","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086163","url":null,"abstract":"The road surface condition information is very useful for the safety of road users and to inform road administrators for conducting appropriate maintenance. Roughness features of road surface; such as speed bumps and potholes, have bad effects on road users and their vehicles. Usually speed bumps are used to slow motor-vehicle traffic in specific areas in order to increase safety conditions. On the other hand driving over speed bumps at high speeds could cause accidents or be the reason for spinal injury. Therefore informing road users of the position of speed bumps through their journey on the road especially at night or when lighting is poor would be a valuable feature. This paper exploits a mobile sensor computing framework to monitor and assess road surface conditions. The framework measures the changes in the gravity orientation through a gyroscope and the shifts in the accelerometer's indications, both as an assessment for the existence of speed bumps. 
The proposed classification approach used the theory of rough mereology to rank the modified data in order to make a useful recommendation to road users.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115037563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086179
Rafael Garcia Leonel Miani, Estevam Hruschka
In recent years, many researchers have focused their studies on large, growing knowledge bases. Most techniques focus on building algorithms that help the Knowledge Base (KB) extend itself automatically (or semi-automatically). In this article, we use a generalized association rule mining algorithm, especially to increase the number of relations between the KB's categories. However, association rule algorithms generate many rules, and evaluating each one is a hard step. We therefore also developed a structure, based on pruning obvious itemsets and generalized rules, which decreases the number of discovered rules; the use of generalized association rules contributes to this reduction. Experiments confirm that our approach helps to increase the relationships between the KB's domains as well as to facilitate the process of evaluating extracted rules.
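The pruning of "obvious" patterns can be sketched as follows: frequent item pairs are mined, and any pair in which one item merely generalizes the other under a taxonomy is dropped, since such a rule carries no new information. The toy taxonomy and the pair-only mining are assumptions for illustration, not the paper's full generalized-rule algorithm:

```python
from itertools import combinations

PARENT = {"dog": "animal", "cat": "animal", "animal": None}  # hypothetical taxonomy

def is_ancestor(a, x):
    """True if a is a (transitive) generalization of x in the taxonomy."""
    while x is not None:
        x = PARENT.get(x)
        if x == a:
            return True
    return False

def mine_pairs(transactions, min_count):
    """Count co-occurring item pairs; keep frequent ones, pruning 'obvious'
    pairs where one item is an ancestor of the other (e.g. dog with animal)."""
    counts = {}
    for t in transactions:
        for a, b in combinations(sorted(t), 2):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return {pair for pair, c in counts.items()
            if c >= min_count
            and not is_ancestor(pair[0], pair[1])
            and not is_ancestor(pair[1], pair[0])}
```

Pruning at the itemset stage, before rules are generated, is what keeps the number of rules a human must evaluate manageable.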
{"title":"Analyzing the use of obvious and generalized association rules in a large knowledge base","authors":"Rafael Garcia Leonel Miani, Estevam Hruschka","doi":"10.1109/HIS.2014.7086179","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086179","url":null,"abstract":"In recent years, many researches have been focusing their studies in large growing knowledge bases. Most techniques focus on building algorithms to help the Knowledge Base (KB) automatically (or semi-automatically) extends. In this article, we make use of a generalized association rule mining algorithm in order, specially, to increase the relations between KB's categories. Although, association rules algorithms generates many rules and evaluate each one is a hard step. So, we also developed a structure, based on pruning obvious itemsets and generalized rules, which decreases the amount of discovered rules. The use of generalized association rules contributes to their reduction. Experiments confirm that our approach helps to increase the relationships between the KB's domains as well as facilitate the process of evaluating extracted rules.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126456514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086172
D. Tsarev, R. Kurynin, M. Petrovskiy, I. Mashechkin
In this paper we describe an NMF-based approach applied to the problem of determining an employee's access needs. The conducted research showed that the proposed NMF-based methods provide a useful analytical framework for processing and modeling employee access-needs data, and the obtained results demonstrate acceptable performance and provide a descriptive representation model.
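Non-negative matrix factorization decomposes a non-negative matrix V (here one could imagine users × resources access counts) into W·H with W, H ≥ 0, so each row of H is an interpretable "access pattern". The sketch below uses the standard multiplicative-update rules for the Frobenius objective on a toy matrix; it is a generic NMF, not the authors' specific method:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=300, seed=1):
    """Factor V (n x m) into W (n x r) and H (r x m), both non-negative,
    via Lee-Seung multiplicative updates minimizing ||V - WH||_F."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(r)]
    eps = 1e-9
    for _ in range(iters):
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(m)]
             for i in range(r)]
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(r)]
             for i in range(n)]
    return W, H
```

Because the updates are multiplicative, entries initialized positive stay non-negative, which is what makes the resulting patterns directly readable as (soft) resource groupings.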
{"title":"Applying non-negative matrix factorization methods to discover user;s resource access patterns for computer security tasks","authors":"D. Tsarev, R. Kurynin, M. Petrovskiy, I. Mashechkin","doi":"10.1109/HIS.2014.7086172","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086172","url":null,"abstract":"In the paper we describe the NMF-based approach applied to the problem of determining an employee's access needs. The conducted research showed that the proposed NMF-based methods provide a useful analytical framework for processing and modeling employee's access needs data, and the obtained results demonstrate acceptable performance and provide descriptive representation model.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133530322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086185
Abdelhak Bousbaci, Nadjet Kamel
Clustering partitions data into groups such that data in the same group are similar. Many clustering algorithms have been proposed in the literature; K-means is the most used one because of its implementation simplicity and efficiency. Many clustering algorithms build on K-means, aiming to improve execution time, clustering quality, or both. Clustering quality can be improved by an optimal selection of the initial centroids using, for example, meta-heuristics; execution time can be improved using parallelism. In this paper, we propose a parallel hybrid K-means based on Google's MapReduce framework for the parallelism and on the PSO meta-heuristic for the choice of the initial centroids. This algorithm is used to cluster multi-dimensional data sets. The results show that using a network of machines to process the data improves both the execution time and the clustering quality.
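The MapReduce decomposition of K-means can be simulated sequentially: the map phase emits (nearest-centroid, point) pairs, and the reduce phase averages each centroid's bucket. This sketch covers only that split; the PSO initialization and the distributed execution of the paper are not modeled:

```python
def kmeans_mapreduce(points, centroids, iters=10):
    """One machine simulating the MapReduce K-means loop: map = assign each
    point to its nearest centroid, reduce = average each centroid's bucket."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    for _ in range(iters):
        buckets = {}
        for p in points:  # map phase: emit (centroid index, point)
            c = min(range(len(centroids)), key=lambda i: dist2(p, centroids[i]))
            buckets.setdefault(c, []).append(p)
        centroids = [  # reduce phase: new centroid = mean of its bucket
            tuple(sum(col) / len(col) for col in zip(*buckets[i]))
            if i in buckets else centroids[i]
            for i in range(len(centroids))
        ]
    return centroids
```

In a real deployment each mapper would also pre-aggregate partial sums per centroid (a combiner), so only k sums travel over the network rather than every point.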
{"title":"A parallel sampling-PSO-multi-core-K-means algorithm using mapreduce","authors":"Abdelhak Bousbaci, Nadjet Kamel","doi":"10.1109/HIS.2014.7086185","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086185","url":null,"abstract":"Clustering is partitioning data into groups, such that data in the same group are similar. Many clustering algorithms are proposed in the literature. K-means is the most used one because of its implementation simplicity and efficiency. Many clustering algorithms are based on the K-means algorithms aiming to improve execution time or clustering quality or both of them. Improving clustering quality can be done by an optimal selection of the initial centroids using for example meta-heuristics. Improving execution time can be performed using parallelism. In this paper, we propose a parallel hybrid K-means based on Google's MapReduce framework for the parallelism and the PSO meta-heuristics for the choice of the initial centroids. This algorithm is used to cluster multi-dimensional data sets. The results proved that using a network of machines to process data improves the execution time and the clustering quality.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133541326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086181
F. Marir, Huwida E. Said, U. AlAlami
One of the major benefits of text mining is that it provides an effective method for analyzing copious amounts of knowledge in the form of text. Since ancient times, knowledge in medicine has been established by recording and analyzing human experiences. This paper presents the first results of using text mining techniques to analyze online sources, e.g. social networks, blogs, forums, the medical literature, and the stories of medical staff and patients, to discover new knowledge and patterns related to diabetes, covering diagnosis, diet, medicine, and activities. These findings are being gathered into an online knowledge repository for diabetic patients to access so they can better manage their disease. In this work, we found that gaining informative and useful knowledge from a much wider range of text sources beyond the medical literature proved significant in detecting patterns in diabetes that were previously considered insignificant.
{"title":"Mining the web and medline medical records to discover new facts on diabetes","authors":"F. Marir, Huwida E. Said, U. AlAlami","doi":"10.1109/HIS.2014.7086181","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086181","url":null,"abstract":"One of the major benefits of text mining is that it provides individuals with an effective method for analyzing copious amounts of knowledge in the form of texts. Since the olden times, knowledge in medicine was established through recording and analyzing human experiences. This paper presents the first results of the use of text mining techniques to analyze online sources e.g. social networks, blogs, forums, medical literature, medical staff and patients' stories for discovering new knowledge and patterns related to diabetic disease covering diagnosis, diet, medicine, and activities. These finding are being gathered into an online knowledge repository for diabetic patients to access and better manage their diseases. In this research work, we found that the impacts of gaining informative and useful knowledge from a whole other range of data (text sources) besides the ones from medical literatures proved significant in detecting patterns in diabetic diseases that were considered to be insignificant before.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133240785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086169
M. Nadi, Nashwa El-Bendary, Hamdi A. Mahmoud, A. Hassanien
Falls are a major cause of fatal injury, especially for the elderly, and accordingly create a serious obstacle to independent living. Many efforts have been put towards providing robust methods to detect falls accurately and in a timely manner. This paper proposes an alerting system that monitors elderly people by detecting their faces and bodies in order to generate an alert when a fall is detected. The proposed system consists of three phases: pre-processing, feature extraction, and detection. An integral image-based approach for multi-scale feature extraction is developed to characterize the distinctive and robust patterns of different face poses. The histogram of oriented gradients (HOG) of the extracted features is then computed. The experiments were done on a dataset consisting of 191 recorded videos with annotated human images covering a large range of pose variations and backgrounds. The proposed fall detection system can prolong independent living and reduce deaths due to falls, and the experiments show its promising performance.
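The HOG feature at the heart of the detection phase can be sketched for a single cell: per-pixel gradients are computed by central differences, and their magnitudes are accumulated into orientation bins. Real HOG adds block normalization and many cells; this minimal single-cell version only shows the binning step:

```python
import math

def hog_cell(img, n_bins=9):
    """Unsigned-orientation histogram (0-180 degrees, 20-degree bins) over one
    cell: central-difference gradients, magnitudes accumulated per angle bin."""
    bins = [0.0] * n_bins
    width = 180 / n_bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180
            bins[int(ang // width) % n_bins] += mag
    return bins
```

A vertical body silhouette concentrates gradient energy in the horizontal-gradient bin, while a person lying on the floor shifts it to vertical gradients, which is why HOG descriptors separate standing from fallen poses.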
{"title":"Fall detection system of elderly people based on integral image and histogram of oriented gradient feature","authors":"M. Nadi, Nashwa El-Bendary, Hamdi A. Mahmoud, A. Hassanien","doi":"10.1109/HIS.2014.7086169","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086169","url":null,"abstract":"Falls represent a major cause of fatal injury, especially for the elderly, which accordingly create a serious obstruction for their independent living. Many efforts have been put towards providing a robust method to detect falls accurately and timely. This paper proposes an alerting system for detecting falls of the elderly people that monitors seniors via detecting the elderly faces and their bodies in order to generate an alert on falling detection. The proposed system consists of three phases that are pre-processing, feature extraction, and detecting phases. The integral image-based approach for multi-scale feature extraction developed to characterize the distinctive and robust patterns of different face poses. The histogram of oriented gradient (HOG) of extracted feature is then computed. The experiments were done on the datasets which consists of 191 recorded videos annotated human images with a large range of pose variations and backgrounds. 
The design of the fall detection system can increase the living time and reduce the rate of death due to the fall and shows the promising performance of the proposed system.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115774011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086210
P. Barchi, Estevam Hruschka
NELL (Never-Ending Language Learning system) is the first system to practice the techniques of the Never-Ending Machine Learning paradigm. It has an inactive component for continually extending its KB: OntExt, whose main idea is to identify relations that are frequently asserted in huge amounts of text data and add them to the KB. Co-occurrence matrices are used to structure the normalized co-occurrence values between the contexts for each category pair in order to identify those context patterns. Each matrix is clustered with Weka's K-means algorithm, and each cluster yields a new candidate relation. This work presents newOntExt: a new approach with new features that makes the ontology extension task feasible for NELL. This approach also has an alternative task of naming new relations found by another NELL component: Prophet. The relations are classified as valid or invalid by humans; the precision is calculated for each experiment, and the results are compared to those of OntExt. Initial results show that ontology extension with newOntExt can help Never-Ending Learning systems expand their volume of beliefs and keep learning with high precision through self-supervision and self-reflection.
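The co-occurrence structure described above can be sketched from (subject category, context phrase, object category) triples: each context phrase becomes a row, each category pair a column, and counts are row-normalized. The toy triples are hypothetical; clustering the resulting rows (e.g. with K-means, as OntExt does via Weka) would then group contexts expressing the same relation:

```python
def context_matrix(triples):
    """Build a row-normalized co-occurrence matrix: rows are context phrases,
    columns are (subject category, object category) pairs, each row sums to 1."""
    counts, pairs = {}, set()
    for subj_cat, context, obj_cat in triples:
        pair = (subj_cat, obj_cat)
        pairs.add(pair)
        row = counts.setdefault(context, {})
        row[pair] = row.get(pair, 0) + 1
    return {ctx: {p: row.get(p, 0) / sum(row.values()) for p in sorted(pairs)}
            for ctx, row in counts.items()}
```

Two contexts with near-identical rows (e.g. "is located in" and "lies in") would land in the same cluster, suggesting a single new relation between those categories.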
{"title":"Never-ending ontology extension through machine reading","authors":"P. Barchi, Estevam Hruschka","doi":"10.1109/HIS.2014.7086210","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086210","url":null,"abstract":"NELL (Never Ending Language Learning system) is the first system to practice the Never-Ending Machine Learning paradigm techniques. It has an inactive component to continually extend its KB: OntExt. Its main idea is to identify and add to the KB new relations which are frequently asserted in huge text data. Co-occurrence matrices are used to structure the normalized values of co-occurrence between the contexts for each category pair to identify those context patterns. The clustering of each matrix is done with Weka K-means algorithm: from each cluster, a new possible relation. This work present newOntExt: a new approach with new features to turn the ontology extension task feasible to NELL. This approach has also an alternative task of naming new relations found by another NELL component: Prophet. The relations are classified as valid or invalid by humans; the precision is calculated for each experiment and the results are compared to those relative to OntExt. 
Initial results show that ontology extension with newOntExt can help Never-Ending Learning systems to expand its volume of beliefs and to keep learning with high precision by acting in auto-supervision and auto-reflection.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122632862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086196
Tiia Ikonen, Harri Niska, Billy Braithwaite, I. Pöllänen, Keijo Haataja, Pekka J. Toivanen, T. Tolonen, J. Isola
In this paper, we address epidemiology and morphology questions of breast cancer with special focus on the different cell features created by lesions. In addition, we provide insight into the feature extraction and classification schemes in the image analysis pipeline. Based on our research, a novel feature extraction approach, a modification of the Distance Transform on Curved Space (DTOCS), is proposed for the analysis and classification of breast cancer images. The first experimental results suggest that the Step-DTOCS-based MLP network is capable of discriminating different cell structures well. The obtained results are presented and analyzed, and further research ideas are discussed.
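The underlying DTOCS idea can be sketched as a gray-weighted distance transform: the cost of stepping between neighboring pixels is 1 plus their gray-level difference, so distances grow where intensity changes and the transform encodes texture, not just geometry. This is a generic chamfer-sweep sketch of the classic DTOCS, not the paper's Step-DTOCS modification:

```python
def dtocs(gray, seeds):
    """Distance Transform on Curved Space, sketched: step cost between
    8-neighbors is 1 + |gray difference|. Forward/backward chamfer sweeps
    are repeated until the distance map stabilizes."""
    INF = float("inf")
    h, w = len(gray), len(gray[0])
    dist = [[0.0 if (y, x) in seeds else INF for x in range(w)] for y in range(h)]
    fwd = ((-1, 0), (0, -1), (-1, -1), (-1, 1))  # visited earlier in a forward sweep
    bwd = ((1, 0), (0, 1), (1, 1), (1, -1))      # visited earlier in a backward sweep
    changed = True
    while changed:
        changed = False
        for nbrs, ys, xs in ((fwd, range(h), range(w)),
                             (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))):
            for y in ys:
                for x in xs:
                    for dy, dx in nbrs:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            step = 1 + abs(gray[y][x] - gray[ny][nx])
                            if dist[ny][nx] + step < dist[y][x]:
                                dist[y][x] = dist[ny][nx] + step
                                changed = True
    return dist
```

On a flat image this reduces to the chessboard distance; a bright ridge acts as a wall the shortest path must either climb (paying the gray difference) or go around, which is what lets DTOCS-style features capture cell-structure texture.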
{"title":"Computer-assisted image analysis of histopathological breast cancer images using step-DTOCS","authors":"Tiia Ikonen, Harri Niska, Billy Braithwaite, I. Pöllänen, Keijo Haataja, Pekka J. Toivanen, T. Tolonen, J. Isola","doi":"10.1109/HIS.2014.7086196","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086196","url":null,"abstract":"In this paper, we address the epidemiology and morphology questions of breast cancer with special focus on different cell features created by lesions. In addition, we provide an insight into feature extraction and classification schemes in the image analysis pipeline. Based on our conducted research work, a novel feature extraction approach, a modification of Distance Transform on Curved Space (DTOCS), is proposed for analysis and classification of breast cancer images. The first experimental results suggest that the Step-DTOCS-based MLP-network is capable of discriminating different cell structures in a respectable way. The obtained results are presented and analyzed, and further research ideas are discussed.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129665474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}