Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334064
W. Chin, C. Loo, N. Kubota
This paper presents a new network for environment learning and online topological map building. It comprises two layers: input and memory. The input layer collects sensory information and incrementally categorizes it into a set of topological nodes. In the memory layer, edges connect the clustered information (nodes) to form a topological map; edges store the robot's actions and bearings. The advantages of the proposed method are: 1) it represents multiple places using multidimensional Gaussian distributions and requires no prior knowledge to work in a natural environment; 2) it can process more than one sensory source simultaneously in continuous space during robot navigation; and 3) it is incremental and uses Bayes' decision theory for learning and inference. Finally, the proposed method was validated on several standardized benchmark datasets.
{"title":"Multi-channel Bayesian adaptive resonance associative memory for environment learning and topological map building","authors":"W. Chin, C. Loo, N. Kubota","doi":"10.1109/ICIEV.2015.7334064","DOIUrl":"https://doi.org/10.1109/ICIEV.2015.7334064","url":null,"abstract":"This paper presents a new network for environment learning and online topological map building. It comprises two layers: input and memory. The input layer collects sensory information and incrementally categorizes it into a set of topological nodes. In the memory layer, edges connect the clustered information (nodes) to form a topological map; edges store the robot's actions and bearings. The advantages of the proposed method are: 1) it represents multiple places using multidimensional Gaussian distributions and requires no prior knowledge to work in a natural environment; 2) it can process more than one sensory source simultaneously in continuous space during robot navigation; and 3) it is incremental and uses Bayes' decision theory for learning and inference. Finally, the proposed method was validated on several standardized benchmark datasets.","PeriodicalId":367355,"journal":{"name":"2015 International Conference on Informatics, Electronics & Vision (ICIEV)","volume":"184 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127052337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334031
T. Tsujimura, Kosuke Urata, K. Izumi
This paper describes classification techniques that distinguish hand signs based only on electromyogram signals of the forearm. The relationship between finger gestures and the forearm electromyogram is investigated using two signal processing approaches: an empirical thresholding method and a metaheuristic method. The former judges muscle activity according to criteria determined experimentally in advance and evaluates the activity patterns of the muscles. The latter learns the electromyogram characteristics and automatically creates a classification algorithm by applying genetic programming. Discrimination experiments on typical hand signs are carried out to evaluate the effectiveness of the proposed methods.
{"title":"Hand sign classification techniques based on forearm electromyogram signals","authors":"T. Tsujimura, Kosuke Urata, K. Izumi","doi":"10.1109/ICIEV.2015.7334031","DOIUrl":"https://doi.org/10.1109/ICIEV.2015.7334031","url":null,"abstract":"This paper describes classification techniques that distinguish hand signs based only on electromyogram signals of the forearm. The relationship between finger gestures and the forearm electromyogram is investigated using two signal processing approaches: an empirical thresholding method and a metaheuristic method. The former judges muscle activity according to criteria determined experimentally in advance and evaluates the activity patterns of the muscles. The latter learns the electromyogram characteristics and automatically creates a classification algorithm by applying genetic programming. Discrimination experiments on typical hand signs are carried out to evaluate the effectiveness of the proposed methods.","PeriodicalId":367355,"journal":{"name":"2015 International Conference on Informatics, Electronics & Vision (ICIEV)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115184557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334042
H. Masuta, Shinichiro Makino, Hun-ok Lim, T. Motoyoshi, K. Koyanagi, T. Oshima
This paper describes an object extraction method based on plane detection that extracts unknown objects for service robots equipped with a depth sensor. Depth sensors are now commonly used to perceive 3D space in an environment; in robot perception, they have been applied to tasks in unknown environments such as surface reconstruction and model fitting. The Point Cloud Library (PCL) is a well-known open-source library for handling 3D point cloud data. However, robot perception for grasping still suffers from high computational costs and low accuracy when perceiving small objects. We therefore propose a PSO-based plane detection method with RG and an object extraction method based on geometric invariance. To verify the accuracy and computational cost of unknown object extraction, we compared the proposed method with PCL. Experimental results show that the proposed method achieves higher accuracy and drastically lower computational cost for unknown object extraction.
{"title":"Unknown object extraction for robot partner using depth sensor","authors":"H. Masuta, Shinichiro Makino, Hun-ok Lim, T. Motoyoshi, K. Koyanagi, T. Oshima","doi":"10.1109/ICIEV.2015.7334042","DOIUrl":"https://doi.org/10.1109/ICIEV.2015.7334042","url":null,"abstract":"This paper describes an object extraction method based on plane detection that extracts unknown objects for service robots equipped with a depth sensor. Depth sensors are now commonly used to perceive 3D space in an environment; in robot perception, they have been applied to tasks in unknown environments such as surface reconstruction and model fitting. The Point Cloud Library (PCL) is a well-known open-source library for handling 3D point cloud data. However, robot perception for grasping still suffers from high computational costs and low accuracy when perceiving small objects. We therefore propose a PSO-based plane detection method with RG and an object extraction method based on geometric invariance. To verify the accuracy and computational cost of unknown object extraction, we compared the proposed method with PCL. Experimental results show that the proposed method achieves higher accuracy and drastically lower computational cost for unknown object extraction.","PeriodicalId":367355,"journal":{"name":"2015 International Conference on Informatics, Electronics & Vision (ICIEV)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115442501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7333975
Yinn Xi Boon, S. I. Ch'ng
Face images with visual variations can significantly influence the performance of a face recognition system. Euler Principal Component Analysis (e-PCA) uses a dissimilarity measure to increase the differences between subjects even when the face images are subject to visual variations. Previous experiments show that e-PCA is particularly effective at reconstructing occluded face images. Thus, in this paper, we investigate whether e-PCA can be used to address the problem of visual variation in face recognition by using the reconstructed face images for the classification process. Different classifiers are also used in our investigation to examine the effect of the reconstructed face image data on the process. Experiments on the ORL, AR, and Yale face databases show improvements in the recognition rate using e-PCA under certain circumstances.
{"title":"Face recognition using Euler Principal Component Analysis","authors":"Yinn Xi Boon, S. I. Ch'ng","doi":"10.1109/ICIEV.2015.7333975","DOIUrl":"https://doi.org/10.1109/ICIEV.2015.7333975","url":null,"abstract":"Face images with visual variations can significantly influence the performance of a face recognition system. Euler Principal Component Analysis (e-PCA) uses a dissimilarity measure to increase the differences between subjects even when the face images are subject to visual variations. Previous experiments show that e-PCA is particularly effective at reconstructing occluded face images. Thus, in this paper, we investigate whether e-PCA can be used to address the problem of visual variation in face recognition by using the reconstructed face images for the classification process. Different classifiers are also used in our investigation to examine the effect of the reconstructed face image data on the process. Experiments on the ORL, AR, and Yale face databases show improvements in the recognition rate using e-PCA under certain circumstances.","PeriodicalId":367355,"journal":{"name":"2015 International Conference on Informatics, Electronics & Vision (ICIEV)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115659435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334034
A. Nijholt
Humor is important in our daily life, whether our activities are at home, at work, or in public spaces, for example during sports or other recreational and entertainment activities. Until now, computational humor, the research area that investigates rules and algorithms to understand and generate humor, has only looked at verbal humor, in particular puns (word play) and jokes. Nowadays, however, humor also has to be understood when it appears in digital audiovisual media, in interactive virtual environments (game environments), or with the help of smart and interactive objects and devices, including wearables. In this paper we discuss the characteristics of the various media and environments in which humor can emerge. The goal, however, is to make clear that future smart environments can facilitate humorous event creation by their human partners and can take the initiative to generate humor.
{"title":"The humor continuum: From text to smart environments (keynote paper)","authors":"A. Nijholt","doi":"10.1109/ICIEV.2015.7334034","DOIUrl":"https://doi.org/10.1109/ICIEV.2015.7334034","url":null,"abstract":"Humor is important in our daily life, whether our activities are at home, at work, or in public spaces, for example during sports or other recreational and entertainment activities. Until now, computational humor, the research area that investigates rules and algorithms to understand and generate humor, has only looked at verbal humor, in particular puns (word play) and jokes. Nowadays, however, humor also has to be understood when it appears in digital audiovisual media, in interactive virtual environments (game environments), or with the help of smart and interactive objects and devices, including wearables. In this paper we discuss the characteristics of the various media and environments in which humor can emerge. The goal, however, is to make clear that future smart environments can facilitate humorous event creation by their human partners and can take the initiative to generate humor.","PeriodicalId":367355,"journal":{"name":"2015 International Conference on Informatics, Electronics & Vision (ICIEV)","volume":"18 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120860960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7333973
Nutnicha Maneesaeng, P. Punyabukkana, A. Suchato
Video-call applications are not originally designed to accommodate blind users, since they are not the main target audience. However, this kind of application can serve as a tele-assistance solution in which assistants help blind users navigate or perform tasks that require sight. We field-tested a popular video-call application, Line, with blind users communicating with their assistants for help in various tasks, and found several drawbacks that must be overcome to make the application usable by both parties. In this work, we focus on the fact that blind users cannot reliably capture the video frames their assistants must see in order to help with sight-demanding tasks; they may therefore have to record repeatedly, waving the camera rather randomly until the necessary frames appear. We propose an algorithm that constructs scenes from remotely recorded video frames, producing wide-angle or panoramic images for the assistants. This feature is integrated into our video-call system using WebRTC technology. Six assistant volunteers tested the proposed system, comparing it with an existing video-call application, and gave it a satisfaction rating of 4.17 out of 5.
{"title":"Tele-assistance system for the blinds using Video-call with remote scene construction","authors":"Nutnicha Maneesaeng, P. Punyabukkana, A. Suchato","doi":"10.1109/ICIEV.2015.7333973","DOIUrl":"https://doi.org/10.1109/ICIEV.2015.7333973","url":null,"abstract":"Video-call applications are not originally designed to accommodate blind users, since they are not the main target audience. However, this kind of application can serve as a tele-assistance solution in which assistants help blind users navigate or perform tasks that require sight. We field-tested a popular video-call application, Line, with blind users communicating with their assistants for help in various tasks, and found several drawbacks that must be overcome to make the application usable by both parties. In this work, we focus on the fact that blind users cannot reliably capture the video frames their assistants must see in order to help with sight-demanding tasks; they may therefore have to record repeatedly, waving the camera rather randomly until the necessary frames appear. We propose an algorithm that constructs scenes from remotely recorded video frames, producing wide-angle or panoramic images for the assistants. This feature is integrated into our video-call system using WebRTC technology. Six assistant volunteers tested the proposed system, comparing it with an existing video-call application, and gave it a satisfaction rating of 4.17 out of 5.","PeriodicalId":367355,"journal":{"name":"2015 International Conference on Informatics, Electronics & Vision (ICIEV)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123331392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334006
A. K. Saha, Md. Firoz Mridha, Molla Rashied Hussein, J. Das
In this paper, the design and implementation of a Bangla DeConverter, which DeConverts Universal Networking Language (UNL) expressions into Bangla, is presented. UNL is an artificial language that not only facilitates translation between natural languages across the world but also supports their unification. The DeConverter is the core software component of a UNL system. The paper also covers the linguistic analysis of Bangla required for the DeConversion process. A set of DeConversion rules has been developed for converting UNL expressions into Bangla. Experimental results show that these rules successfully generate correct Bangla text from UNL expressions. The rules currently produce basic, simple Bangla sentences; they are being extended to handle advanced and complex sentences.
{"title":"Design and implementation of an efficient DeConverter for generating Bangla sentences from UNL expression","authors":"A. K. Saha, Md. Firoz Mridha, Molla Rashied Hussein, J. Das","doi":"10.1109/ICIEV.2015.7334006","DOIUrl":"https://doi.org/10.1109/ICIEV.2015.7334006","url":null,"abstract":"In this paper, the design and implementation of a Bangla DeConverter, which DeConverts Universal Networking Language (UNL) expressions into Bangla, is presented. UNL is an artificial language that not only facilitates translation between natural languages across the world but also supports their unification. The DeConverter is the core software component of a UNL system. The paper also covers the linguistic analysis of Bangla required for the DeConversion process. A set of DeConversion rules has been developed for converting UNL expressions into Bangla. Experimental results show that these rules successfully generate correct Bangla text from UNL expressions. The rules currently produce basic, simple Bangla sentences; they are being extended to handle advanced and complex sentences.","PeriodicalId":367355,"journal":{"name":"2015 International Conference on Informatics, Electronics & Vision (ICIEV)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121083872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7333982
A. Abilgaziyev, T. Kulzhan, N. Raissov, Md. Hazrat Ali, W. L. K. Match, N. Mir-Nasiri
3D printing, or additive manufacturing, is a process of producing three-dimensional solid objects from a software design. Limits on the simultaneous use of multiple colors and materials, together with relatively low printing speeds, are the major problems of fused filament fabrication (FFF). In this study, an extrusion model with five nozzles is proposed to address these deficiencies. The proposed extrusion model enables printing with five different colors and materials simultaneously, without stopping the operational process while switching filaments. Its major advantage is that the tailor-made, lightweight hot-end extruder is driven by only two motors. The proposed extrusion model provides a novel technique for multi-color, multi-material 3D printing.
{"title":"Design and development of multi-nozzle extrusion system for 3D printer","authors":"A. Abilgaziyev, T. Kulzhan, N. Raissov, Md. Hazrat Ali, W. L. K. Match, N. Mir-Nasiri","doi":"10.1109/ICIEV.2015.7333982","DOIUrl":"https://doi.org/10.1109/ICIEV.2015.7333982","url":null,"abstract":"3D printing, or additive manufacturing, is a process of producing three-dimensional solid objects from a software design. Limits on the simultaneous use of multiple colors and materials, together with relatively low printing speeds, are the major problems of fused filament fabrication (FFF). In this study, an extrusion model with five nozzles is proposed to address these deficiencies. The proposed extrusion model enables printing with five different colors and materials simultaneously, without stopping the operational process while switching filaments. Its major advantage is that the tailor-made, lightweight hot-end extruder is driven by only two motors. The proposed extrusion model provides a novel technique for multi-color, multi-material 3D printing.","PeriodicalId":367355,"journal":{"name":"2015 International Conference on Informatics, Electronics & Vision (ICIEV)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126205502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334009
T. Obo, H. Kakudi, Yuri Yoshihara, C. Loo, N. Kubota
As the number of elderly people living alone increases, more caregivers are required to support the aging society. Such elderly people have few chances to communicate with others and are likely to become socially isolated. Monitoring systems are one possible solution for confirming the safety of elderly people, and sensor networks or portable sensing devices can be applied to them. Basically, however, such systems are unilateral: they merely observe human states. It is important for elderly people and their families to create opportunities to communicate with each other. Visualization based on lifelogging is an important and effective technique for understanding and sharing personal preferences and lifestyles. If the elderly's family members can share their hobbies, diversions, and lifestyle, they can easily select common topics to discuss and start communicating. In this study, we develop a visualization system that represents the personal relations between elderly people and their family members based on their daily activities. Moreover, this paper proposes a method of topological visualization based on the spring-mass-damper system (Spring Model).
{"title":"Lifelog visualization for elderly health care in Informationally Structured Space","authors":"T. Obo, H. Kakudi, Yuri Yoshihara, C. Loo, N. Kubota","doi":"10.1109/ICIEV.2015.7334009","DOIUrl":"https://doi.org/10.1109/ICIEV.2015.7334009","url":null,"abstract":"As the number of elderly people living alone increases, more caregivers are required to support the aging society. Such elderly people have few chances to communicate with others and are likely to become socially isolated. Monitoring systems are one possible solution for confirming the safety of elderly people, and sensor networks or portable sensing devices can be applied to them. Basically, however, such systems are unilateral: they merely observe human states. It is important for elderly people and their families to create opportunities to communicate with each other. Visualization based on lifelogging is an important and effective technique for understanding and sharing personal preferences and lifestyles. If the elderly's family members can share their hobbies, diversions, and lifestyle, they can easily select common topics to discuss and start communicating. In this study, we develop a visualization system that represents the personal relations between elderly people and their family members based on their daily activities. Moreover, this paper proposes a method of topological visualization based on the spring-mass-damper system (Spring Model).","PeriodicalId":367355,"journal":{"name":"2015 International Conference on Informatics, Electronics & Vision (ICIEV)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127379032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334055
Takaaki Oka, M. Morimoto
To realize object recognition in practical applications, it is necessary to handle layered objects. In this paper we introduce an RGB-D sensor into an object recognition system to handle overlapping objects. The proposed method divides objects into segments and merges them using a partial recognition scheme. Several experiments show that our method can recognize objects that are 30% occluded with sufficient accuracy.
{"title":"An extraction and recognition method for partially hidden objects","authors":"Takaaki Oka, M. Morimoto","doi":"10.1109/ICIEV.2015.7334055","DOIUrl":"https://doi.org/10.1109/ICIEV.2015.7334055","url":null,"abstract":"To realize object recognition in practical applications, it is necessary to handle layered objects. In this paper we introduce an RGB-D sensor into an object recognition system to handle overlapping objects. The proposed method divides objects into segments and merges them using a partial recognition scheme. Several experiments show that our method can recognize objects that are 30% occluded with sufficient accuracy.","PeriodicalId":367355,"journal":{"name":"2015 International Conference on Informatics, Electronics & Vision (ICIEV)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116739295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}