Pub Date: 2013-08-01 | DOI: 10.1109/ICHCI-IEEE.2013.6887772
S. Vani, G. Suresh
EEG recordings are used to analyze the electrical signals generated by the brain and play a central role in diagnosing and monitoring neurological disorders such as epilepsy. Epilepsy cannot always be controlled by available medical treatments, and its major manifestation is the epileptic seizure. The Lifting-Based Discrete Wavelet Transform (LBDWT) is an efficient tool for representing electroencephalogram signals, and EEG changes are classified by a Multilayer Perceptron Neural Network (MLPNN). Classification rules were extracted from EEG recorded from healthy volunteers, epilepsy patients during seizure-free intervals, and epilepsy patients during epileptic seizures. The EEG signals were used as inputs to MLPNNs trained with the Back-Propagation (BP) and Levenberg-Marquardt (LM) algorithms. Decision making was performed in two stages: feature extraction using LBDWT and classification using MLPNNs trained with the BP and LM algorithms. This paper presents an algorithm for classifying EEG signals (normal and epileptic) based on the lifting-based Discrete Wavelet Transform and pattern recognition techniques.
Title: Performance analysis of lifting based DWT and MLPNN for epilepsy seizure from EEG. Published in: 2013 International Conference on Human Computer Interactions (ICHCI).
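The lifting scheme behind the LBDWT splits a signal into even and odd samples, predicts the odd samples from the even ones (detail coefficients), and updates the even samples (approximation coefficients). A minimal one-level Haar lifting step in Python, as a generic textbook illustration rather than the authors' exact wavelet:

```python
def lifting_haar_step(signal):
    """One level of a lifting-based Haar DWT: split, predict, update.

    Returns (approximation, detail) coefficient lists; `signal` is
    assumed to have even length.
    """
    even = signal[0::2]  # split: even-indexed samples
    odd = signal[1::2]   # split: odd-indexed samples
    detail = [o - e for e, o in zip(even, odd)]         # predict (high-pass)
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update (low-pass)
    return approx, detail
```

Repeating the step on the approximation band yields a multi-level decomposition; subband statistics (e.g. energy or mean of each band) would then serve as input features for the MLPNN.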
Pub Date: 2013-08-01 | DOI: 10.1109/ICHCI-IEEE.2013.6887796
J. Archana, Senthil Raja Chermapandan, S. Palanivel
For widespread use of a software product, the product should be available and usable in the local languages of different countries. Many companies are planning to, or have already, invested time and money to internationalize their products and websites and localize them into other languages. When making a product support multiple languages, there is a risk that conventions of one specific language are assumed. To ensure that the product is usable anywhere in the world while adapting to the cultural identity of the user, we provide an automation framework to check the internationalization functionality of the product. The implementation of our proposed framework shows that it can identify hardcoded content in a specific language, text that has been over-translated for a specific language, and character-handling issues across languages. The framework is flexible, as it can be used with any web-based software product. It will reduce the time spent on regression testing, and future enhancements of the product can be tested with minimal effort.
Title: Automation framework for localizability testing of internationalized software
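The hardcoded-content and character-handling checks described above can be approximated by comparing localized resource strings against the base language. A sketch under assumed conventions; the key/value resource layout and the rule names are illustrative, not the paper's implementation:

```python
def find_localization_issues(base, localized, non_latin_locale=True):
    """Compare a base-language resource map with a localized one and
    flag likely localizability defects (hypothetical rule names)."""
    issues = []
    for key, base_text in base.items():
        loc_text = localized.get(key)
        if loc_text is None:
            issues.append((key, "missing translation"))
        elif loc_text == base_text:
            # identical text suggests a hardcoded / untranslated string
            issues.append((key, "possibly hardcoded"))
        elif non_latin_locale and loc_text.isascii():
            # pure-ASCII text in a non-Latin locale hints at a
            # character-handling or encoding problem
            issues.append((key, "ascii-only in non-Latin locale"))
    return issues
```

Run against each supported locale's resource bundle, such a check can be wired into regression testing so new strings are validated automatically.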
Pub Date: 2013-08-01 | DOI: 10.1109/ICHCI-IEEE.2013.6887816
K. Rangasamy, L. Sivakumar
This work retunes the parameters of the decentralised PI controller of the Alstom gasifier using the Firefly Algorithm. A coal gasifier is a highly nonlinear, multivariable process involving many complex reactions, and it is difficult to control. The existing decentralised PI controller of the Alstom gasifier fails to satisfy the requirements at the 0% load condition for a sinusoidal pressure disturbance, although it meets the constraints at the 50% and 100% load conditions. This is due to improper tuning of the decentralised PI controllers, so we retune the PI controller responsible for the pressure loop using the Firefly Algorithm (FA). The existing pressure-loop PI parameters are replaced by the retuned ones, and pressure disturbance tests (both sinusoidal and step), a load change test, and a coal quality test are conducted and compared with the existing results. The results show that the process satisfies the performance requirements at the 0%, 50% and 100% load conditions without violating the constraints.
Title: Partial-retuning of decentralised PI controller of nonlinear multivariable process using firefly algorithm
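The Firefly Algorithm itself is compact: each candidate, here a (Kp, Ki) pair, is a firefly whose brightness is the inverse of its cost, and dimmer fireflies drift toward brighter ones. A sketch with a stand-in quadratic objective, since the real performance index would require simulating the Alstom gasifier benchmark; all parameter values are generic defaults, not the paper's settings:

```python
import math
import random

def firefly_optimize(cost, bounds, n=15, iters=60, beta0=1.0, gamma=0.5, alpha=0.3):
    """Firefly Algorithm: minimise `cost` over the box `bounds`.
    Dimmer fireflies move toward brighter (lower-cost) ones with
    attractiveness beta0*exp(-gamma*r^2), plus a shrinking random step."""
    random.seed(1)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    light = [cost(p) for p in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:  # firefly j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d, (lo, hi) in enumerate(bounds):
                        move = beta * (pop[j][d] - pop[i][d]) \
                               + alpha * (random.random() - 0.5)
                        pop[i][d] = min(hi, max(lo, pop[i][d] + move))
                    light[i] = cost(pop[i])
        alpha *= 0.95  # damp the random walk as the swarm converges
    best = min(range(n), key=light.__getitem__)
    return pop[best], light[best]

# Hypothetical stand-in for the pressure-loop performance index; the real
# objective would simulate the gasifier for each candidate (Kp, Ki).
def demo_cost(gains):
    kp, ki = gains
    return (kp - 2.0) ** 2 + (ki - 0.5) ** 2

(kp, ki), err = firefly_optimize(demo_cost, [(0.0, 5.0), (0.0, 2.0)])
```

In the partial-retuning setting, only the pressure-loop gains are exposed to the optimizer while the other loops keep their existing parameters.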
Pub Date: 2013-08-01 | DOI: 10.1109/ICHCI-IEEE.2013.6887768
Chandranath Adak
This paper introduces an efficient edge detection method based on the Gabor filter and rough clustering. The input image is smoothed by a Gabor function, and rough clustering is used to approach edge detection with soft computing. Hysteresis thresholding produces the actual output, i.e., the edges of the input image. To show its effectiveness, the proposed technique is compared with several other edge detection methods.
Title: Gabor filter and rough clustering based edge detection
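Of the pipeline described, the final hysteresis-thresholding stage is the easiest to illustrate: strong responses seed edges, and weak responses survive only if connected to a strong one. A generic sketch operating on a gradient-magnitude grid; the Gabor smoothing and rough-clustering stages are assumed to have run already:

```python
def hysteresis_threshold(grad, low, high):
    """Edge pixels are gradient responses >= high, plus any >= low that
    are 8-connected (directly or transitively) to one of those."""
    rows, cols = len(grad), len(grad[0])
    edges = {(r, c) for r in range(rows) for c in range(cols)
             if grad[r][c] >= high}
    stack = list(edges)          # flood-fill from the strong seeds
    while stack:
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols
                        and (rr, cc) not in edges and grad[rr][cc] >= low):
                    edges.add((rr, cc))
                    stack.append((rr, cc))
    return edges
```

Weak responses that are not connected to any strong seed, typically noise, are discarded, which is the usual motivation for hysteresis over a single threshold.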
Pub Date: 2013-08-01 | DOI: 10.1109/ICHCI-IEEE.2013.6887814
M. Salagar, P. Kulkarni, S. Gondane
American Sign Language (ASL) is the primary language of many people who are deaf. ASL is a complex language that conveys linguistic information through hand signs combined with facial expressions and body postures. The designed sign language recognizer works on ASL gestures. A Kinect is used as the image capture device and also meets the low-cost requirement. Skeleton data for the user's joints captured by the Kinect are analyzed, and the video is processed for signs at runtime. If a gesture is predefined in the library, it is transcribed to a word or phrase, and the output is presented as voice and text. The implemented system works with high accuracy: after parallel implementation it achieves 95.6% accuracy. The recognizer can be used as a tutor for those who want to learn sign language, as well as a translator for deaf people so that they can communicate efficiently with everyone.
Title: Implementation of dynamic time warping for gesture recognition in sign language using high performance computing
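The title names dynamic time warping as the matching technique; the abstract does not spell it out, but the standard DTW recurrence over two feature sequences looks like this (a textbook sketch, not the authors' parallel implementation):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of feature
    vectors (e.g. per-frame joint coordinates from the Kinect skeleton)."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between frame i of a and frame j of b
            cost = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

A captured gesture would then be labelled with the library template of minimum DTW distance; the pairwise comparisons are independent, which is what makes the matching amenable to parallel execution.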
Pub Date: 2013-08-01 | DOI: 10.1109/ICHCI-IEEE.2013.6887797
B. Kotiyal, Ankit Kumar, B. Pant, R. Goudar
The unremitting increase in computational strength has produced a tremendous flow of data over the past two decades, known as "big data". Big data is data that cannot be processed with existing tools or techniques; when processed, it can yield interesting information such as analyses of user behaviour and business intelligence. This paper discusses the difference between traditional relational databases and big data, and describes the characteristics of big data. It also covers the distinct big data processing channels, the various challenges, and how big data can be a solution for organizations. Big data is not only about storing and handling large volumes of data but also about analysing the data and extracting the correct information from it in less time. Finally, the paper discusses Hadoop, an open-source framework that allows distributed processing of massive datasets on clusters of computers, which is demonstrated by using a log file to extract information based on a user query.
Title: Big data: Mining of log file through hadoop
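The log-file extraction via Hadoop can be pictured as a mapper and reducer pair in the style of Hadoop Streaming. This local simulation counts requests per client IP; the log layout and the counting query are illustrative assumptions, not the paper's actual job:

```python
from collections import defaultdict

def mapper(line):
    """Emit (client_ip, 1) per access-log line; the whitespace-separated
    layout with the IP in the first field is an assumption."""
    fields = line.split()
    if fields:
        yield fields[0], 1

def reducer(pairs):
    """Sum the counts per key."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

def run_job(lines):
    """Local stand-in for map -> shuffle -> reduce; on a cluster, Hadoop
    Streaming runs the mapper and reducer as separate processes over
    HDFS splits."""
    return reducer(kv for line in lines for kv in mapper(line))
```

The same mapper/reducer pair, reading stdin and writing stdout, could be submitted unchanged to a Hadoop Streaming job for cluster-scale logs.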
Pub Date: 2013-08-01 | DOI: 10.1109/ICHCI-IEEE.2013.6887784
A. Chowdhury, S. Karmakar, Swathi M. Reddy, Subrata Ghosh, D. Chakrabarti
Beyond visual information, perception through the tactile senses plays a very important role in how consumers judge product quality and hence in their purchase decisions. Marketing researchers feel there is a persistent need to satisfy the tactile perception of targeted buyers in order to increase sales. Since direct tactile perception of a product is still not feasible in an e-retailing environment, alternative means of information are needed that can satisfy the tactile needs of e-buyers. E-retailers aim to impress their consumer pool by providing the information that consumers would otherwise seek during on-shelf buying. To that end, the present paper investigates the effectiveness of a product personality rating style as an alternative means of satisfying consumers' tactile needs during the product review process, in place of the written product reviews commonly found on e-retailers' websites. The observations of the current research clearly indicate that product personality ratings can represent the tactile experience of products in a complementary way and satisfied consumers' tactile needs to some extent. Therefore, it is advised that e-retailers might use product personality ratings to satisfy consumers' tactile needs during the product review process and to merchandise products on their websites.
Title: Product personality rating style for satisfaction of tactile need of online buyers — A human factors issue in the context of e-retailers' web-design
Pub Date: 2013-08-01 | DOI: 10.1109/ICHCI-IEEE.2013.6887794
Chitralekha Bhat, B. Mithun, Vikram Saxena, Vrushali Kulkarni, Sunil Kumar Kopparapu
Interactive Voice Response (IVR) technology makes customer service available 24 × 7 and cost-effective, and has been adopted by most customer-facing enterprises. While effective, IVRs require inputs to be keyed in using a touch-tone phone, constraining the type of information that can be entered. For this reason customers generally choose to speak with a human agent directly, negating the cost-effectiveness of deploying an IVR. The need to speak to a human agent is more profound in a multilingual country like India, where in addition the agent must speak the language of the customer, making the approach unscalable. Speech-technology-based IVR solutions not only have all the benefits of a conventional IVR system but also address the scalability issue when configured to work in multiple languages. To turn a usable speech-enabled IVR into a complete and robust solution, several aspects need to be handled to sustain the core objective of serving the customer accurately. In this paper, we use the Speech Enabled Railway Enquiry System (SERES), designed to provide railway information with the Indian scenario in mind, as a case study to identify the issues that must be addressed to enable a usable speech-based IVR solution. The aspects examined are: a) interaction design for customer experience, b) field trial strategy, c) system performance monitoring and evaluation, and d) infrastructure sizing to cater to the expected call volumes.
Title: Deploying usable speech enabled IVR systems for mass use
Pub Date: 2013-08-01 | DOI: 10.1109/ICHCI-IEEE.2013.6887811
V. Shankar, C. V. Guru Rao
In this paper, we answer iceberg queries with minimum execution time by devising a new specialized index-position algorithm. Iceberg queries compute small outputs from large databases and/or data warehouses subject to user-provided thresholds. The aggregate values they produce support important decisions by industry users such as knowledge workers, managers, and analysts in decision support, information retrieval, and knowledge discovery systems. The basic bitmap index technique incurs a long execution time for iceberg queries, since it requires bitwise-AND operations between all pairs of bitmaps, and this execution time grows as the cardinality of an attribute increases. To compute iceberg queries quickly, our algorithm instead fetches the index positions of all 1-bits from each bitmap vector in the bitmap table; these indexed positions are then processed to determine the 1-bit positions common to each pair of bitmaps, answering the iceberg query with minimum execution time. Exhaustive experimentation demonstrates that our approach is much more efficient than the existing strategy.
Title: Computing iceberg queries efficiently using bitmap index positions
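The index-position idea can be sketched as follows: store the positions of the 1-bits of each bitmap vector and intersect position sets instead of AND-ing full bitmaps. A toy version thresholding pairs of bitmaps; the paper's actual algorithm and data layout may differ:

```python
def ones_positions(bitmap):
    """Index positions of the 1-bits in a bitmap vector."""
    return {i for i, bit in enumerate(bitmap) if bit}

def iceberg_pairs(bitmaps, threshold):
    """For every pair of bitmap vectors, intersect their 1-bit position
    sets (rather than AND-ing the full bitmaps) and keep pairs whose
    common count meets the iceberg threshold."""
    pos = {name: ones_positions(bm) for name, bm in bitmaps.items()}
    names = sorted(pos)
    result = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            common = pos[a] & pos[b]
            if len(common) >= threshold:
                result[(a, b)] = sorted(common)
    return result
```

For sparse, high-cardinality attributes the position sets are small, so the intersections touch far fewer entries than a full bitwise AND over every bitmap pair.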
Pub Date: 2013-08-01 | DOI: 10.1109/ICHCI-IEEE.2013.6887782
Chandranath Adak
This paper presents a text extraction method for Google Maps and GIS maps/images. Because the approach is unsupervised, no prior knowledge or training set about the textual and non-textual parts is required. Fuzzy C-Means clustering is used for image segmentation, and the Prewitt operator is used to detect edges. Connected component analysis and a gridding technique enhance the correctness of the results. The proposed method reaches a 98.5% accuracy level on the experimental data sets.
Title: Unsupervised text extraction from G-maps
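The Fuzzy C-Means segmentation step can be illustrated on scalar pixel intensities. This sketch uses the standard FCM membership and centre updates with a simple deterministic initialisation at the data extremes; the cluster count and fuzzifier m = 2 are generic choices, not the paper's settings:

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=50):
    """Fuzzy C-Means on scalar data (e.g. pixel intensities).

    Returns (centres, memberships). Centres are initialised at the data
    extremes, a simple deterministic choice for this sketch.
    """
    centres = [min(data), max(data)] if c == 2 else sorted(data)[:c]
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for x in data:
            dists = [abs(x - v) or 1e-9 for v in centres]  # avoid div by zero
            u.append([1.0 / sum((di / dk) ** (2.0 / (m - 1.0)) for dk in dists)
                      for di in dists])
        # centre update: mean of the data weighted by u^m
        centres = [sum((u[i][j] ** m) * data[i] for i in range(len(data)))
                   / sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(c)]
    return centres, u
```

On a map image, one cluster would capture dark text-like pixels and the other the background; the membership map then feeds the edge detection and connected component stages.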