Background light effect of a dynamically reconfigurable vision-chip architecture
R. Moriwaki, Minora Watanabe
Pub Date: 2010-12-01  DOI: 10.1109/SII.2010.5708363
Demand is increasing for high-speed image recognition functions, superior to the human eye, implemented on autonomous vehicles and robots. To date, both analog-type and digital vision chips have been developed. Nevertheless, realizing such high-speed real-time image recognition remains extremely difficult because the template-information transfer rate and the template-matching operation cycle reach the order of petapixels per second. To accommodate template-matching operations executed at rates greater than a petapixel per second, a dynamically reconfigurable vision-chip architecture has been developed that introduces a holographic memory technique into current VLSI technology. However, this architecture must receive image information in addition to configuration-context information, raising the concern that image-information light might reduce the retention time of the photodiode memories on the chip. This paper therefore clarifies that background light does not affect the photodiode memories of a dynamically reconfigurable vision-chip architecture.

Robotic grasping based on partial shape information
Zhaojia Liu, B. G. Lounell, J. Ota
Pub Date: 2010-12-01  DOI: 10.1109/SII.2010.5708342
This paper presents work on fast robotic grasping by a mobile robot based on partial shape information about an object. The information is acquired by a laser range finder mounted on the robot at an inclined angle. Feature data are extracted by scanning part of the object to calculate the grasping point, so the robot can grasp the object without acquiring and processing all of the object's shape information. An experiment is conducted, and the result illustrates the validity of the proposed method.

Frame indexing of the illumination-based synchronized high-speed vision sensors
Lei Hou, S. Kagami, K. Hashimoto
Pub Date: 2010-12-01  DOI: 10.1109/SII.2010.5708292
To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition times of the vision sensors must be synchronized. Although the influences of background light and amplitude fluctuation have been successfully eliminated, it is still necessary to distinguish the frame indexes of multiple synchronized vision sensors. In this paper, an illumination-based synchronization method, derived from the phase-locked loop (PLL) mechanism and based on Manchester encoding, is proposed and evaluated. The blinking illumination signal can carry sequential information by being modulated with an arbitrary pseudo-random sequence. Simulation results demonstrate that 1,000 Hz frame-rate vision sensors can be successfully synchronized to an LED illumination modulated at 250 Hz, with satisfactory stability and jitter.

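The abstract does not detail the modulation scheme beyond naming Manchester encoding, but the core idea it relies on can be illustrated briefly. The bit sequence, helper names, and encoding convention below are assumptions for the sketch, not the paper's definitions:

```python
def manchester_encode(bits):
    """Encode bits as Manchester symbol pairs (IEEE 802.3 convention:
    0 -> high-to-low, 1 -> low-to-high).  Every bit then carries a
    mid-bit transition that a receiver's PLL can lock onto."""
    out = []
    for b in bits:
        out.extend((0, 1) if b else (1, 0))
    return out

def manchester_decode(symbols):
    """Recover bits from consecutive symbol pairs; a low-to-high
    pair (0, 1) decodes to 1, a high-to-low pair (1, 0) to 0."""
    return [1 if symbols[i] < symbols[i + 1] else 0
            for i in range(0, len(symbols), 2)]

# A short illustrative frame-index sequence carried by the blinking LED
frame_index_bits = [1, 0, 1, 1, 0, 0, 1, 0]
signal = manchester_encode(frame_index_bits)
assert manchester_decode(signal) == frame_index_bits
```

In the paper's setup the 1,000 Hz sensors oversample the 250 Hz illumination, leaving margin for the PLL to track phase; how symbols map onto illumination periods is not specified in the abstract.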
Can multiple tactile pressure stimulation in gripping position induce virtual force directions?
Lope Ben Porquis, M. Konyo, S. Tadokoro
Pub Date: 2010-12-01  DOI: 10.1109/SII.2010.5708359
Perceiving minute force directions through tactile sensations during tool manipulation is an important factor in human skill acquisition. Different pressure levels at the finger contacts could be responsible for the perception of force direction. In this paper, an experimental study was conducted to verify whether pressure stimulation patterns applied to the thumb and fingers in a gripping position can produce a sense of force direction. Six participants performed a force-direction discrimination experiment while holding a grounded pen-type interface that induces pressure sensations using an air-suction technique. The results showed that participants felt three distinct force directions from the applied pressure stimulation patterns, verifying that applying different pressure levels at skin contact locations in a pen-grip position can produce a sensation of force direction.

Development of a laser scan method to decrease hidden areas caused by objects like pole at whole 3-D shape measurement
Akihiko Hata, K. Ohno, E. Takeuchi, S. Tadokoro, Ken Sakurada, Naoki Miyahra, K. Higashi
Pub Date: 2010-12-01  DOI: 10.1109/SII.2010.5708365
The authors are researching three-dimensional mapping using a mobile robot and a 3-D laser scanner. We developed a 3-D laser scanner that can measure a whole 3-D shape by combining a 2-D laser scanner with a pan-tilt base. However, when measuring the surroundings with the scanner on a mobile robot, some areas cannot be measured because of the overview camera and the wireless-LAN antenna mounted on the robot. In this paper, the 3-D laser scanner measures a uniform, whole 3-D shape by tilting to a constant angle around the pitch axis and then rotating one revolution around the yaw axis. We propose a measuring method that decreases the effect of occluding objects by introducing an offset between the 2-D laser scan plane and the pitch axis.

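The scan geometry described in the abstract (a constant tilt about the pitch axis, a full yaw rotation, and an offset between the 2-D scan plane and the pitch axis) amounts to a short chain of rigid transforms. The frame conventions and the direction of the offset below are assumptions for illustration, not the paper's definitions:

```python
import math

def scan_point_to_robot(r, theta, tilt, yaw, offset):
    """Map a 2-D laser return (range r, beam angle theta) to robot
    coordinates.  Assumed frames: x forward, y left, z up; the scan
    plane is shifted by `offset` along the scanner's forward axis
    away from the pitch (y) axis; the scanner is tilted by `tilt`
    about the pitch axis, and the whole head is rotated by `yaw`
    about the z axis to sweep the scene."""
    # Point in the 2-D scan plane (local z is zero)
    x, y, z = r * math.cos(theta), r * math.sin(theta), 0.0
    # Offset between the scan plane and the pitch axis
    x += offset
    # Constant tilt about the pitch (y) axis
    xt = x * math.cos(tilt) + z * math.sin(tilt)
    zt = -x * math.sin(tilt) + z * math.cos(tilt)
    # Rotation about the yaw (z) axis
    xw = xt * math.cos(yaw) - y * math.sin(yaw)
    yw = xt * math.sin(yaw) + y * math.cos(yaw)
    return (xw, yw, zt)

# With no tilt, yaw, or offset, a 1 m forward return stays at (1, 0, 0);
# a nonzero offset displaces the whole scan plane before the rotations,
# which is what shifts occluding structures out of the measurement shadow.
p = scan_point_to_robot(1.0, 0.0, 0.0, 0.0, 0.0)
```

The sketch only shows the coordinate chain; how the offset is chosen to minimize the hidden area is the subject of the paper itself.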
Multi-context programmable optically reconfigurable gate array using a silver-halide holographic memory
S. Kubota, M. Watanabe
Pub Date: 2010-07-05  DOI: 10.1109/SII.2010.5708364
Optically reconfigurable gate arrays (ORGAs), dynamically reconfigurable devices consisting of a gate-array VLSI, a holographic memory, and a laser array, have been developed to achieve more than 1 teragate of virtual integration, far beyond the integration possible with currently available VLSIs. With an ORGA, a large software system can be implemented directly as hardware, so that large real-time systems can be realized. A programmable ORGA architecture has been proposed to support user programmability. This paper presents demonstration results for a multi-context programmable ORGA using a silver-halide non-volatile holographic memory and a corresponding writer system.