F. Kobayashi, Keiichi Kitabayashi, Hiroyuki Nakamoto, F. Kojima
Multi-fingered robot hands have attracted much attention in various fields. Many robot hands have been proposed so far, and we have developed a hand/arm robot with the Universal Robot Hand II. A teleoperation system allows intuitive manipulation of the hand/arm robot. Here, a motion capture system that measures human motion is used to operate the robot remotely. Various types of human motion capture have been developed so far. This paper deals with a motion capture system based on inertial measurement units (IMUs) and hand/arm teleoperation using this inertial motion capture.
{"title":"Hand/Arm Robot Teleoperation by Inertial Motion Capture","authors":"F. Kobayashi, Keiichi Kitabayashi, Hiroyuki Nakamoto, F. Kojima","doi":"10.1109/RVSP.2013.60","DOIUrl":"https://doi.org/10.1109/RVSP.2013.60","url":null,"abstract":"The multi-fingered robot hand has much attention in various fields. Many robot hands have been proposed so far and we have developed a hand/arm robot with universal robot hand II. A teleoperation system allows intuitive manipulation of the hand/arm robot. Here, a motion capture system of measuring human motion is used for operating the robot remotely. Various types of the human motion capture have been developed so far. This paper deals with a motion capture system with inertial measurement units (IMUs) and a hand/arm teleoperation with the inertial motion capture.","PeriodicalId":6585,"journal":{"name":"2013 Second International Conference on Robot, Vision and Signal Processing","volume":"16 1","pages":"234-237"},"PeriodicalIF":0.0,"publicationDate":"2013-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81414951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yihsin Ho, T. Nishitani, Toru Yamaguchi, E. Sato-Shimokawara, N. Tagawa
This paper proposes a hand gesture recognition system for a human-robot interface. Our research aims to provide users with operations that are user-friendly and more intuitive. We use a stereo camera to capture images as the primary source of information and adopt the Gaussian mixture model (GMM) as the main method of image analysis. The GMM method applied in this paper is a precise, stable, and computationally efficient foreground segmentation method. Our system consists mainly of three steps: capture video with the camera, extract the user's image using the GMM method, and recognize the hand gesture. In this paper, we focus on describing the system's overall concept and the GMM method. Experimental results from our prototype are also discussed to show the research potential of our system.
{"title":"A Hand Gesture Recognition System Based on GMM Method for Human-Robot Interface","authors":"Yihsin Ho, T. Nishitani, Toru Yamaguchi, E. Sato-Shimokawara, N. Tagawa","doi":"10.1109/RVSP.2013.72","DOIUrl":"https://doi.org/10.1109/RVSP.2013.72","url":null,"abstract":"This paper proposes a hand gesture recognition system for human-robot interface. Our research aims to provide users user-friendly operations in a more intuitive manner. We use the stereo camera to capture images as the primary source of information retrieval, and adapt Gaussian mixture model (GMM) method as the main method of image analysis. The GMM method we applied in this paper is a precise, stable and computationally efficient foreground segment method. Our system is mainly with the following three steps: take video by camera, obtain user's images based on GMM method, and recognize hand gesture. In this paper, we will focus on describing the system's overall concepts and GMM method. An experiment result of our prototype will also be discussed to show the research potential of our system.","PeriodicalId":6585,"journal":{"name":"2013 Second International Conference on Robot, Vision and Signal Processing","volume":"35 1","pages":"291-294"},"PeriodicalIF":0.0,"publicationDate":"2013-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86764126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
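As a hedged illustration of the GMM foreground-segmentation idea described in the abstract above, here is a minimal per-pixel Gaussian-mixture background model in Python. All parameter values (number of components, learning rate, thresholds) are conventional illustrative choices, not values from the paper, and a real system would run one such model per pixel over the whole frame:

```python
import numpy as np

class PixelGMM:
    """Minimal per-pixel Gaussian mixture background model (Stauffer-Grimson style)."""

    def __init__(self, K=3, alpha=0.05, var0=400.0, match_sigma=2.5, bg_thresh=0.7):
        self.alpha = alpha            # learning rate
        self.var0 = var0              # initial variance for new components
        self.match_sigma = match_sigma
        self.bg_thresh = bg_thresh    # cumulative weight that counts as "background"
        self.mu = np.zeros(K)         # component means
        self.var = np.full(K, var0)   # component variances
        self.w = np.full(K, 1.0 / K)  # component weights

    def update(self, x):
        """Feed one intensity value; return True if it looks like foreground."""
        d = np.abs(x - self.mu)
        matched = d < self.match_sigma * np.sqrt(self.var)
        if matched.any():
            # update the closest matching component
            k = int(np.argmin(np.where(matched, d, np.inf)))
            self.w = (1.0 - self.alpha) * self.w
            self.w[k] += self.alpha
            self.mu[k] += self.alpha * (x - self.mu[k])
            self.var[k] += self.alpha * ((x - self.mu[k]) ** 2 - self.var[k])
        else:
            # no component explains x: replace the weakest one
            k = int(np.argmin(self.w))
            self.mu[k], self.var[k], self.w[k] = x, self.var0, self.alpha
        self.w /= self.w.sum()
        # components with high weight and low variance model the background
        order = np.argsort(-self.w / np.sqrt(self.var))
        n_bg = 1 + int(np.searchsorted(np.cumsum(self.w[order]), self.bg_thresh))
        background = set(order[:n_bg].tolist())
        return (not matched.any()) or (k not in background)
```

After a short warm-up on a static scene the model absorbs the dominant intensity into a high-weight, low-variance component, and sudden departures from it are flagged as foreground.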
In this paper, we devise a method to detect robbery and violent scenarios, with the goal of improving the security of self-service banks. The method first extracts the motion region from the video and denotes this region with a rectangle. It then calculates the optical flow and energy of the rectangular region. Using the length and width of the rectangle, the energy, and the orientation variance of the motion region as features, the method distinguishes videos containing robbery and violent segments from other videos. Experimental results on a number of surveillance videos show that the devised method is feasible and achieves very good performance.
{"title":"Detecting Robbery and Violent Scenarios","authors":"Yong Xu, J. Wen","doi":"10.1109/RVSP.2013.14","DOIUrl":"https://doi.org/10.1109/RVSP.2013.14","url":null,"abstract":"In this paper, we devise a method to detect robbery and violent scenarios for the goal of improving the security of self-service banks. The method first extracts the motion region from the video and denotes this region with a rectangle. Then, the method calculates the optical flow and energy of the rectangular region. The method takes the length and width of the rectangle, the energy, and the orientation variance of the motion region which is denoted by the same rectangle as features to distinguish the video where robbery and violent segments occur from other videos. The experimental results from a number of surveillance videos show that our devised method is feasible and can achieve a very good performance.","PeriodicalId":6585,"journal":{"name":"2013 Second International Conference on Robot, Vision and Signal Processing","volume":"5 1","pages":"25-30"},"PeriodicalIF":0.0,"publicationDate":"2013-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87905308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
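The feature vector described in the abstract above (rectangle size, motion energy, orientation variance) can be sketched as follows. The dense optical-flow field is assumed to be given by some external estimator, and the exact feature definitions here (mean squared magnitude for energy, circular variance for orientation spread) are illustrative guesses rather than the paper's formulas:

```python
import numpy as np

def motion_features(rect, flow):
    """rect: (x, y, w, h) motion rectangle; flow: (H, W, 2) optical-flow field (u, v)."""
    x, y, w, h = rect
    u = flow[y:y + h, x:x + w, 0].ravel()
    v = flow[y:y + h, x:x + w, 1].ravel()
    energy = float(np.mean(u ** 2 + v ** 2))      # motion energy in the region
    ang = np.arctan2(v, u)
    # circular variance of flow orientation: 0 = coherent motion, 1 = chaotic
    R = np.hypot(np.mean(np.cos(ang)), np.mean(np.sin(ang)))
    orient_var = float(1.0 - R)
    return np.array([w, h, energy, orient_var])
```

The intuition matches the abstract: a struggle produces high energy with incoherent directions (high orientation variance), whereas ordinary walking produces coherent, low-variance flow.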
In this paper, we develop a hybrid fuzzy semi-supervised learning algorithm (HFSA) for face recognition, based on the segregation of distinctive regions that include outlier instances and their counterparts. First, it obtains the distribution information of each sample, represented by a fuzzy membership degree; the membership grade is then incorporated into the redefinition of the scatter matrices, yielding an initial fuzzy classification of the whole regular feature space. Second, a new semi-supervised fuzzy clustering algorithm is presented on the basis of the precise number of clusters and initial pattern centers obtained in the pattern discovery stage, and is then applied to classify the outlier instances, yielding the final pattern recognition. Experimental results on the ORL and XM2VTS face databases demonstrate the effectiveness of the proposed method.
{"title":"A Hybrid Fuzzy Semi-supervised Learning Algorithm for Face Recognition","authors":"Xiaoning Song, Zi Liu","doi":"10.1109/RVSP.2013.32","DOIUrl":"https://doi.org/10.1109/RVSP.2013.32","url":null,"abstract":"In this paper, we develop a hybrid fuzzy semi supervised learning algorithm (HFSA) for face recognition, which is based on the segregation of distinctive regions that include outlier instances and its counterparts. First, it achieves the distribution information of each sample that represented with fuzzy membership degree, and then the membership grade is incorporated into the redefinition of scatter matrices, as a result, the initial fuzzy classification of whole regular feature space is obtained. Second, a new semi-supervised fuzzy clustering algorithm is presented on the basis of the precise number of clusters and initial pattern centers that have been previously obtained in the pattern discovery stage, and then applied in order to perform the outlier instances classification, yielding the final pattern recognition. Experimental results conducted on the ORL and XM2VTS face databases demonstrate the effectiveness of the proposed method.","PeriodicalId":6585,"journal":{"name":"2013 Second International Conference on Robot, Vision and Signal Processing","volume":"58-60 1","pages":"111-114"},"PeriodicalIF":0.0,"publicationDate":"2013-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77188784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face recognition with a single training sample per person is a great challenge. In this paper, we propose a new algorithm based on sparse representation to solve this problem. The algorithm takes the two-dimensional training samples directly as the training set, rather than image vectors, so the dictionary for sparse representation can be obtained from only one sample. The proposed algorithm includes a training process and a classification process. In the training process, each class's dictionary is trained using the K-SVD algorithm. In the classification process, the test sample is projected onto every trained dictionary and the reconstruction residual is computed. Finally, the test sample is assigned to the class that yields the minimum reconstruction residual. Experimental results show that the proposed method is efficient and achieves higher recognition accuracy than many existing schemes.
{"title":"Face Recognition with Single Training Sample per Person Using Sparse Representation","authors":"Wei Huang, Xiaohui Wang, Zhong Jin","doi":"10.1109/RVSP.2013.26","DOIUrl":"https://doi.org/10.1109/RVSP.2013.26","url":null,"abstract":"It is a great challenge for face recognition with single training sample per person. In this paper, we try to propose a new algorithm based sparse representation to solve this problem. The algorithm takes the two-dimensional training samples as the training set directly rather than image vectors. So we can obtain the dictionary of sparse representation only using one sample. The proposed algorithm includes training process and classification process. In training process all the class's dictionaries have been trained using KSVD algorithm. In classification process, the test sample has been projected to every trained dictionary, and then computes the reconstruction residual. At last the test sample is classified to the one who can get the minimum reconstruction residual. Experimental results show that the proposed method is efficient and it can achieve higher recognition accuracy than many existing schemes.","PeriodicalId":6585,"journal":{"name":"2013 Second International Conference on Robot, Vision and Signal Processing","volume":"30 1","pages":"84-88"},"PeriodicalIF":0.0,"publicationDate":"2013-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87287066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
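A minimal sketch of the residual-based classification step described above. Two simplifications relative to the paper are made and should be noted: the per-class dictionaries here are just the normalized training samples themselves rather than K-SVD-learned atoms, and the sparse coding step is replaced by a plain least-squares projection:

```python
import numpy as np

def train_dictionaries(samples_by_class):
    """Build one dictionary per class from its training samples.
    (The paper learns these with K-SVD; using the raw samples is a baseline.)"""
    dicts = {}
    for c, X in samples_by_class.items():
        D = np.asarray(X, float).T           # (dim, n_samples): one column per sample
        D = D / np.linalg.norm(D, axis=0)    # unit-norm atoms
        dicts[c] = D
    return dicts

def classify(x, dicts):
    """Assign x to the class whose dictionary reconstructs it with least residual."""
    x = np.asarray(x, float)
    best, best_r = None, np.inf
    for c, D in dicts.items():
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)   # project onto class subspace
        r = np.linalg.norm(x - D @ coef)               # reconstruction residual
        if r < best_r:
            best, best_r = c, r
    return best
```

Note that with one sample per class (the paper's setting), each "dictionary" is a single atom and the residual reduces to the distance from x to the line spanned by that sample.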
To inspect prescription drugs in press-through packages (PTPs), we propose an automated inspection system based on computer vision. In the proposed system, we capture images of PTP drugs and apply hierarchical identification consisting of several weak classifiers. In this paper, we report the results of inspection experiments distinguishing about a thousand kinds of PTPs. The system achieves a sufficient recognition rate and processing time.
{"title":"A Visual Inspection System for Prescription Drugs in Press through Package","authors":"T. Murai, M. Morimoto","doi":"10.1109/RVSP.2013.18","DOIUrl":"https://doi.org/10.1109/RVSP.2013.18","url":null,"abstract":"To inspect prescription drugs with press-through package (PTP), we propose an automated inspection system which based on computer vision. In the proposed system, we capture PTP drugs and apply hierarchical identification consist of several weak classifiers. In this paper, we report several results of inspection experiments which distinguish about a thousand kinds of PTPs. As a result, we have achieved sufficient recognition rate and processing time.","PeriodicalId":6585,"journal":{"name":"2013 Second International Conference on Robot, Vision and Signal Processing","volume":"56 1","pages":"43-46"},"PeriodicalIF":0.0,"publicationDate":"2013-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90906645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we integrate the CC2530 into a medical system based on ZigBee network technology. A biosensor and the CC2530 chip are combined in one module, which collects patient biosignal data and transmits the data to the medical server over the ZigBee network. The biosignal data in the medical system are analyzed and displayed on a visual platform. We focus on using a cluster-tree ZigBee network to reduce the power consumption of wireless transmission and to overcome the disadvantages of wired networks.
{"title":"Wireless Medical System Using ZigBee Network Communication","authors":"Shu-Cheng Gu, Hsien Lung Chuan, Shu-Hua Wang","doi":"10.1109/RVSP.2013.69","DOIUrl":"https://doi.org/10.1109/RVSP.2013.69","url":null,"abstract":"In this paper, we integrate CC2530 with Medical System based on ZigBee network technology. Bio-sensor and CC2530 chip in one module, which collects patient biosignal data and transmit the data to Medical Server by ZigBee network. The biosignal data in Medical System will be analyzed and displayed in visual platform. We focus on using the cluster tree ZigBee network to improve consumption of wireless transmission and solve the wired network disadvantage.","PeriodicalId":6585,"journal":{"name":"2013 Second International Conference on Robot, Vision and Signal Processing","volume":"55 1","pages":"278-281"},"PeriodicalIF":0.0,"publicationDate":"2013-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88806210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In decision making in companies and public organizations, priority ordering plays an essential role. For example, discussion is essential for stakeholders to achieve mutual consensus. In a discussion, differences among consensus-building processes can affect the final conclusion. Therefore, analysis needs to find the critical remarks that lead to consensus ("focus remarks"). However, confirming a "focus remark" requires that the consensus-building process be understood exactly, from the initial disagreement through the parties' consent and detailed exposition. Speech in such consensus discussions is very helpful for analyzing this interaction. This paper addresses the design of a recognition system whose results are achieved by means of MFCC (Mel-Frequency Cepstral Coefficients) and an HMM (Hidden Markov Model). Recognition of six emotion patterns obtained an 86.8% recognition rate. Based on the relation between emotional states and emotions, we analyzed the discussion more objectively.
{"title":"Building a Recognition System of Speech Emotion and Emotional States","authors":"Xiaoyan Feng, J. Watada","doi":"10.1109/RVSP.2013.64","DOIUrl":"https://doi.org/10.1109/RVSP.2013.64","url":null,"abstract":"To make a decision in companies or public organizations, the priority ordering plays an essential. For example, their discussion is essential for stakeholder to achieve mutual consensus,. In the discussion, the difference among consensus building processes can affect the last conclusion. Therefore, it is necessary for analysis to find critical remarks reaching the consensus ('hfocus remark'h). However, it is a basis to confirm the gfocus remark'h that the consensus building process can understand exactly from the disagreement state consent and detailed exposition parties. The consensus discussion is very helpful to promote interaction by the speech. The paper addresses the design of recognition system and results are achieved by means of MFCC (Mel Frequency Campestral Coefficients) and HMM (Hidden Markov Model). Results in recognition of six emotion patterns obtained 86.8% recognition rate. According to the relation of emotional states and emotions we analyzed the support more objectively.","PeriodicalId":6585,"journal":{"name":"2013 Second International Conference on Robot, Vision and Signal Processing","volume":"18 1","pages":"253-258"},"PeriodicalIF":0.0,"publicationDate":"2013-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85902750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
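The MFCC front end that the paper above relies on can be sketched in plain NumPy. All framing and filterbank parameters below are conventional defaults rather than values from the paper, and the HMM classification stage is omitted:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Compute MFCC frames from a mono signal (illustrative parameters)."""
    # 1. frame the signal with a Hann window
    frames = np.array([signal[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft   # power spectrum

    # 2. triangular mel filterbank
    def hz2mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel2hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz2mel(0.0), hz2mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)    # rising edge
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)    # falling edge
    logmel = np.log(power @ fb.T + 1e-10)                     # log mel energies

    # 3. DCT-II decorrelates the log energies -> cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_mels)))
    return logmel @ dct.T                                     # (n_frames, n_ceps)
```

In a full system these per-frame coefficient vectors (often with delta features appended) would be the observation sequence fed to one HMM per emotion class.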
In this paper, a novel feature extraction algorithm, called Joint Discriminant Sparse Neighborhood Preserving Embedding (JDSNPE), is proposed based on Discriminant Sparse Neighborhood Preserving Embedding (DSNPE) and joint learning. JDSNPE aims to obtain row sparsity of the transformation matrix while preserving the discriminant sparse neighborhood. Experimental results on the Yale database demonstrate the effectiveness of the proposed algorithm compared with Sparse Neighborhood Preserving Embedding and DSNPE.
{"title":"A Novel Feature Extraction Algorithm Based on Joint Learning","authors":"Jeng-Shyang Pan, Lijun Yan, Zongguang Fang","doi":"10.1109/RVSP.2013.15","DOIUrl":"https://doi.org/10.1109/RVSP.2013.15","url":null,"abstract":"In this paper, a novel feature extraction algorithm, called Joint Discriminant Sparse Neighborhood Preserving Embedding (JDSNPE), based on Discriminant Sparse Neighborhood Preserving Embedding (DSNPE) and joint learning is proposed. JDSNPE aims to get the row sparsity of the transformation matrix while preserving discriminant sparse neighborhood. Experimental results on Yale database demonstrate the effectiveness of the proposed algorithm compared to Sparse Neighborhood Preserving Embedding and DSNPE.","PeriodicalId":6585,"journal":{"name":"2013 Second International Conference on Robot, Vision and Signal Processing","volume":"10 1","pages":"31-34"},"PeriodicalIF":0.0,"publicationDate":"2013-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87446285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Md. Hazrat Ali, S. Kurokawa, Kensuke Uesugi, Takashi Teraoka
This paper presents the software configuration and control strategy developed for a 3D measurement probe during gear profile measurement. The total system consists of a web camera, the 3D probe, a workpiece holder, and an integrated computer system through which complete control is performed. The paper highlights the developed software features and discusses the control strategy for the 3D measurement probe, which keeps the probe always aligned with its correct position. The system records the displacement of the 3D probe as X and Y coordinate values. Vision-based measurement is very useful for increasing measurement performance, and it helps analyze the measurement results after the measurement is complete. The system also records video and saves image frames in real time, and can open the video file in offline mode. In this paper, a vision-based control theory is proposed mainly for the surface error measurement of various types of gears.
{"title":"Camera Based 3D Probe Control in Measuring Gear Profile","authors":"Md. Hazrat Ali, S. Kurokawa, Kensuke Uesugi, Takashi Teraoka","doi":"10.1109/RVSP.2013.62","DOIUrl":"https://doi.org/10.1109/RVSP.2013.62","url":null,"abstract":"This paper presents the developed software configurations and control strategy of 3D measurement probe during measurement of gear profile. The total system consists of a web camera, 3D probe, work piece holder and integrated computer system through which complete control is performed. It mainly highlights the developed software features and discusses the control strategy of the 3D measurement probe in order to keep the measurement probe always align with its correct position. The system is able to record the displacement of the 3D probe in terms of X and Y coordinates value. Vision based measurement is very useful to increase the performance of the measurement. It can help to analyze the measurement result after the complete measurement is accomplished. The system also records video and saves image frames in real-time and also it's able to open the video file in offline mode. In this paper, a vision based control theory is proposed mainly for the surface error measurement of various types of gears.","PeriodicalId":6585,"journal":{"name":"2013 Second International Conference on Robot, Vision and Signal Processing","volume":"77 1","pages":"242-246"},"PeriodicalIF":0.0,"publicationDate":"2013-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79477032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}