Pub Date: 2010-10-01; DOI: 10.1109/ISOT.2010.5687332
Qifeng Qi, R. Du
This paper presents a vision-based micro-assembly system designed to assemble various components in mechanical watch movements. With sizes of only a few millimeters, these components are traditionally assembled by skilled workers using specially designed tools and fixtures. To relieve workers of this tedious work, we designed and built a vision-based micro-assembly system. The system consists of an XY table driven by linear motors, a Z-axis driven by a servomotor, a computer vision system, a set of grippers, and an industrial PC. The control software is written in C++. The accuracy of the system is about 2 μm and the cycle time is about 20 seconds, depending on the assembly task. The paper presents the system in detail, and two practical examples are included.
Title: A vision based micro-assembly system for assembling components in mechanical watch movements. Published in: 2010 International Symposium on Optomechatronic Technologies (ISOT), 25-27 Oct. 2010, Toronto, ON, pp. 1-5.
Pub Date: 2010-10-01; DOI: 10.1109/ISOT.2010.5687328
M. Hadinia, R. Jafari
This paper presents an approach that combines the Element-Free Galerkin (EFG) method and the Finite Element (FE) method for the Diffuse Optical Tomography (DOT) forward problem. DOT is a non-invasive imaging modality for visualizing and continuously monitoring tissue and blood oxygenation levels in the brain and breast. The image reconstruction algorithm in DOT generates images from forward modeling results and boundary measurements, so the ability of the forward model to generate the corresponding data efficiently plays a significant role in DOT image reconstruction. The FE technique with a fixed mesh is one of the most common approaches for solving the diffusion equation in the DOT forward problem; however, in some medical applications meshing is difficult, and the shape and size of the elements introduce further approximation into the forward problem. The mesh-free Galerkin approach has also been applied to DOT, but imposing essential boundary conditions is difficult with it. In this paper, an approach combining the two methods is used, and its validity is investigated through simulation results.
Title: A hybrid EFG-FE analysis for DOT forward problem. Published in: 2010 International Symposium on Optomechatronic Technologies (ISOT), 25-27 Oct. 2010, Toronto, ON, pp. 1-6.
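The FE half of such a hybrid forward solver reduces, in its simplest form, to assembling stiffness and absorption (mass) matrices for the diffusion equation and imposing essential boundary conditions. The sketch below solves a 1-D analogue, -D u'' + μₐ u = 0 on [0,1] with Dirichlet boundary values, using linear elements; the domain, the optical coefficients D and μₐ, and the boundary values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fe_diffusion_1d(n_el=40, D=0.03, mu_a=0.1):
    """Linear-FE solve of -D u'' + mu_a u = 0 on [0,1], u(0)=1, u(1)=0."""
    n = n_el + 1
    h = 1.0 / n_el
    # Element matrices: stiffness D/h*[[1,-1],[-1,1]] plus consistent
    # absorption (mass) term mu_a*h/6*[[2,1],[1,2]].
    ke = (D / h) * np.array([[1.0, -1.0], [-1.0, 1.0]]) \
        + (mu_a * h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    K = np.zeros((n, n))
    for e in range(n_el):            # assemble element by element
        K[e:e + 2, e:e + 2] += ke
    f = np.zeros(n)
    # Impose Dirichlet (essential) boundary conditions by row replacement --
    # the step that is awkward in a pure mesh-free Galerkin formulation.
    K[0, :] = 0.0;  K[0, 0] = 1.0;  f[0] = 1.0
    K[-1, :] = 0.0; K[-1, -1] = 1.0; f[-1] = 0.0
    return np.linalg.solve(K, f)

u = fe_diffusion_1d()
# Compare against the analytic solution sinh(k(1-x))/sinh(k), k = sqrt(mu_a/D).
x = np.linspace(0.0, 1.0, 41)
k = np.sqrt(0.1 / 0.03)
err = np.max(np.abs(u - np.sinh(k * (1.0 - x)) / np.sinh(k)))
```

With 40 linear elements the nodal error against the analytic solution is well below a percent, which is the sense in which a fixed-mesh FE forward model "generates the corresponding data efficiently".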
Pub Date: 2010-10-01; DOI: 10.1109/ISOT.2010.5687371
A. Rostami, H. Sattari, F. Janabi-Sharifi
A 1×4 ultracompact arrayed waveguide grating (AWG) based on Si nanowires is proposed. In this configuration, waveguides formed from pairs of spirals are used in place of the arrayed waveguides of conventional AWGs. The spirals are arranged so that not only does a long light path fit in a small area, but the size can also be adjusted more freely. The presented AWG has four channels with a channel spacing of 0.2 nm and a total footprint of less than 250×290 µm².
Title: Proposal for 1×4 ultracompact arrayed waveguide grating based on Si-nanowire spirals. Published in: 2010 International Symposium on Optomechatronic Technologies (ISOT), 25-27 Oct. 2010, Toronto, ON, pp. 1-6.
Pub Date: 2010-10-01; DOI: 10.1109/ISOT.2010.5687394
Lianxiang Yang, Yonghong Wang, R. Lu
Measuring deformation and strain in materials and structures provides important information for designing and dimensioning products, as well as a scientific basis for optimization, quality control, and assurance. Digital Speckle Pattern Interferometry (DSPI) and Digital Image Correlation (DIC) are two typical whole-field, non-contact experimental techniques that allow rapid and highly accurate measurement of 3D deformation and strain distributions with high resolution. The former can measure small deformations (at the nanometer level) and thus determine small strains (at the micro-strain level); the latter can measure relatively large deformations (micrometers and larger) and thus determine large strains (from hundreds of micro-strain upward). Together, the two techniques cover the full range of whole-field, non-contact deformation and strain measurement: from the nanometer level to a few millimeters or more for deformation, and from micro-strain to a few percent or more for strain. This paper reviews DSPI and DIC and their applications. Both the potential and the limitations of each technique are discussed. The challenges these two techniques face in real-world applications are presented and analyzed, and novel developments and optimizations for practical application are demonstrated.
Title: Advanced optical methods for whole field displacement and strain measurement. Published in: 2010 International Symposium on Optomechatronic Technologies (ISOT), 25-27 Oct. 2010, Toronto, ON, pp. 1-6.
Pub Date: 2010-10-01; DOI: 10.1109/ISOT.2010.5687310
Peter Scheer, A. Alhalabi, I. Mantegh
This paper presents a method to model and reproduce cyclic trajectories captured from human demonstrations. Heuristic algorithms are used to determine the general type of pattern, its parameters, and its kinematic profile. The pattern is described independently of the shape of the surface on which it is demonstrated. Key pattern points are identified based on changes in direction and velocity, and are then reduced based on their proximity. The results of this analysis are used in a task-planning algorithm to produce robot trajectories based on the workpiece geometry. The trajectory is output as code in the robot's native language so that it can be readily downloaded to the robot.
Title: Robot task planning and trajectory learning based on programming by demonstration. Published in: 2010 International Symposium on Optomechatronic Technologies (ISOT), 25-27 Oct. 2010, Toronto, ON, pp. 1-7.
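The key-point step described above (detecting sharp direction changes, then thinning picks that lie too close together) can be sketched as follows. The angle and separation thresholds are illustrative assumptions, and the paper additionally uses velocity changes, which this sketch omits.

```python
import numpy as np

def key_points(traj, angle_thresh_deg=30.0, min_sep=0.5):
    """Indices of trajectory points where the heading turns sharply,
    thinned so kept points are at least `min_sep` apart (proximity
    reduction). Thresholds here are illustrative, not the paper's."""
    traj = np.asarray(traj, dtype=float)
    keys = [0]                                   # always keep the start
    for i in range(1, len(traj) - 1):
        v1, v2 = traj[i] - traj[i - 1], traj[i + 1] - traj[i]
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        if np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) > angle_thresh_deg:
            keys.append(i)                       # sharp direction change
    keys.append(len(traj) - 1)                   # always keep the end
    reduced = [keys[0]]
    for kk in keys[1:]:                          # proximity-based reduction
        if np.linalg.norm(traj[kk] - traj[reduced[-1]]) >= min_sep:
            reduced.append(kk)
    return reduced

# An L-shaped demonstration: straight run, one 90-degree corner, straight run.
corners = key_points([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)])
```

For the L-shaped path only the endpoints and the corner survive, which is exactly the reduced pattern description a task planner would consume.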
Pub Date: 2010-10-01; DOI: 10.1109/ISOT.2010.5687313
Mohammad Goli, A. Ghanbari, F. Janabi-Sharifi, Ghader Karimian Khosroshahi
Full camera pose estimation using only a monocular camera model is an important topic in the field of visual servoing. In this paper, a simple adaptive method for updating the weights of a particle filter is proposed. Using this method, the efficiency of the particle filter in estimating the full pose of the camera is improved. Results of the proposed method are compared with those of a generic particle filter (PF) and an extended Kalman filter (EKF) under the same conditions through extensive computer simulations.
Title: Adaptive particle filter based pose estimation using a monocular camera model. Published in: 2010 International Symposium on Optomechatronic Technologies (ISOT), 25-27 Oct. 2010, Toronto, ON, pp. 1-6.
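A generic bootstrap particle filter of the kind the paper builds on can be sketched as follows. The state here is a 1-D scalar stand-in for the full 6-DOF camera pose, and the effective-sample-size-triggered tempering is only an illustrative example of an adaptive weight update; the paper's actual adaptation rule is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, z, motion_std=0.05, meas_std=0.1):
    """One predict/update/resample cycle of a bootstrap particle filter.
    The tempering below, triggered by a collapsing effective sample size,
    is a stand-in for the paper's (unspecified) adaptive weight update."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian measurement likelihood around observation z.
    log_lik = -0.5 * ((z - particles[:, 0]) / meas_std) ** 2
    w = weights * np.exp(log_lik - log_lik.max())
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < 0.5 * len(w):      # effective sample size low?
        w = weights * np.exp(0.5 * (log_lik - log_lik.max()))  # temper, beta=0.5
        w /= w.sum()
    # Resample and reset to uniform weights.
    idx = rng.choice(len(w), size=len(w), p=w)
    return particles[idx], np.full(len(w), 1.0 / len(w))

# Track a slowly drifting scalar "pose" from noisy observations.
particles = rng.normal(0.0, 1.0, (500, 1))
weights = np.full(500, 1.0 / 500)
for t in range(50):
    z = 0.02 * t + rng.normal(0.0, 0.1)
    particles, weights = pf_step(particles, weights, z)
estimate = float(np.mean(particles[:, 0]))       # near the final pose 0.02 * 49
```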
Pub Date: 2010-10-01; DOI: 10.1109/ISOT.2010.5687331
K. K. Sareen, G. Knopf, R. Canas
Range scanning of building interiors generates very large, partially spurious, and unstructured point cloud data. Accurate information extraction from such data sets is a complex task due to the presence of multiple objects, the diversity of their shapes, the large disparity in feature sizes, and the spatial uncertainty caused by occluded regions. Fast segmentation of such data is necessary for quick understanding of the scanned scene. Unfortunately, traditional range segmentation methods are computationally expensive because they rely almost exclusively on shape parameters (normals, curvature) and are highly sensitive to small geometric distortions in the captured data. This paper introduces a quick and effective technique for segmenting large volumes of colorized range scans of unknown building interiors and labelling clusters of points that represent distinct surfaces and objects in the scene. Rather than computing geometric parameters, the proposed technique uses a robust Hue, Saturation, and Value (HSV) color model to identify rough clusters (objects), which are then refined by eliminating spurious and outlier points through region growing and fixed-distance-neighbor (FDN) analysis. The results demonstrate that the proposed method is effective in identifying continuous clusters and can extract meaningful object clusters even from geometrically similar regions.
Title: Rapid clustering of colorized 3D point cloud data for reconstructing building interiors. Published in: 2010 International Symposium on Optomechatronic Technologies (ISOT), 25-27 Oct. 2010, Toronto, ON, pp. 1-6.
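The two-stage idea (rough clusters from HSV hue, then outlier removal by fixed-distance-neighbor analysis) can be sketched as follows. The hue binning, search radius, and neighbor-count threshold are illustrative assumptions, not the paper's parameters, and the brute-force neighbor search stands in for whatever spatial index a real scan would need.

```python
import colorsys
import numpy as np

def hsv_rough_clusters(rgb, hue_bins=12):
    """Rough cluster id per point from its hue bin (rgb values in [0,1])."""
    hue = np.array([colorsys.rgb_to_hsv(*c)[0] for c in rgb])
    return (hue * hue_bins).astype(int) % hue_bins

def fdn_filter(points, labels, radius=0.25, min_neighbors=2):
    """Keep a point only if enough same-cluster points lie within `radius`
    (fixed-distance-neighbor analysis); drops spurious/outlier points."""
    keep = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        d = np.linalg.norm(points[labels == labels[i]] - p, axis=1)
        keep[i] = np.sum(d < radius) - 1 >= min_neighbors   # exclude self
    return keep

# Two colored surfaces plus one far-away outlier of the same color as the first.
cluster_a = np.array([[0.1 * i, 0.0, 0.0] for i in range(10)])
points = np.vstack([cluster_a, cluster_a + 5.0, [[10.0, 10.0, 10.0]]])
rgb = [(1.0, 0.0, 0.0)] * 10 + [(0.0, 0.0, 1.0)] * 10 + [(1.0, 0.0, 0.0)]
labels = hsv_rough_clusters(rgb)
keep = fdn_filter(points, labels)
```

The isolated red point shares a hue cluster with the first surface but has no fixed-distance neighbors, so the refinement stage discards it while both surfaces survive intact.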
Pub Date: 2010-10-01; DOI: 10.1109/ISOT.2010.5687321
D. Hong, Hyungsuck Cho
In this paper, a depth-of-field extension method is introduced. The extension is realized by combining the variable annular aperture method previously proposed by the authors with a focal-plane oscillation method. Combining the two yields a synergistic effect: the depth of field is extended further than when either method is applied independently. The variable aperture and the focal-plane oscillation are realized by a liquid-crystal spatial light modulator and a deformable mirror, respectively. Simulation and experimental results verify the proposed method.
Title: Depth-of-field extension through focal plane oscillation and variable annular pupil. Published in: 2010 International Symposium on Optomechatronic Technologies (ISOT), 25-27 Oct. 2010, Toronto, ON, pp. 1-6.
Pub Date: 2010-10-01; DOI: 10.1109/ISOT.2010.5687342
Wei-Chih Wang, C. Tsui
A micro-display system based on an optical fiber driven by a 2-D piezoelectric actuator is presented. An optical fiber extends from the free end of the 2-D piezoelectric actuator (free length: 4.3 mm). A field-programmable gate array (FPGA) drives the actuator with a triangular waveform to deflect the optical fiber in orthogonal directions. The FPGA also controls an LED light source, whose light is coupled into a chemically tapered SMF-28 fiber (core diameter: 10 µm). At a pre-set pair of near-resonance frequencies (horizontal: 22 Hz; vertical: 5070 Hz), the deflected optical fiber, in combination with the controlled light, produces an output image with approximate dimensions of 0.3×0.3 mm² and a potential resolution of 400 lines per scan. This paper details the design and fabrication of the micro-display system. The mechanical and optical design of the micro-resonating scanner is discussed, along with its mechanical and optical performance and the resulting output of the 2-D scanner.
Title: 2-D Mechanically resonating fiberoptic scanning display system. Published in: 2010 International Symposium on Optomechatronic Technologies (ISOT), 25-27 Oct. 2010, Toronto, ON, pp. 1-6.
Pub Date: 2010-10-01; DOI: 10.1109/ISOT.2010.5687336
M. Hashimoto, T. Fujiwara, H. Koshimizu, H. Okuda, K. Sumi
We propose a high-speed template matching method that uses a small number of pixels representing a statistical subset of the original template image. In general, reducing the number of template pixels lowers the computational cost of matching; however, high speed and high reliability often trade off against each other in practice. To achieve reliable matching, it is important to extract the few pixels whose combination of location and intensity is most distinctive. For this purpose, analyzing the co-occurrence histogram of local combinations of multiple pixels is useful, because it provides information about their joint occurrence probability. In the proposed method, pixels with low co-occurrence probability are preferentially extracted as the significant template pixels used in the matching process. We also propose a method that approximates the n-pixel co-occurrence probability using several two-dimensional co-occurrence histograms, to save memory. Experiments on more than 480 test images show that roughly 0.2-1% of the template pixels, extracted by the proposed method, achieve practical performance: a recognition success rate of 96.6% and a processing time of 15 ms (on a 3.16 GHz Core 2 Duo).
Title: Extraction of unique pixels based on co-occurrence probability for high-speed template matching. Published in: 2010 International Symposium on Optomechatronic Technologies (ISOT), 25-27 Oct. 2010, Toronto, ON, pp. 1-6.
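The pixel-selection step can be sketched with a single 2-D co-occurrence histogram over horizontal intensity pairs: count how often each (intensity, right-neighbor intensity) pair occurs in the template, then keep the pixels belonging to the rarest pairs. The quantization depth, the single fixed offset, and the keep fraction are illustrative assumptions; the paper combines several such 2-D histograms to approximate n-pixel probabilities.

```python
import numpy as np

def rare_pixel_mask(template, keep_frac=0.01, levels=16):
    """Mask of template pixels whose (intensity, right-neighbor intensity)
    pair is rare in the 2-D co-occurrence histogram; rare pairs are the
    most distinctive and are kept for matching."""
    t = template.astype(int) * levels // 256          # quantize intensities
    a, b = t[:, :-1], t[:, 1:]                        # horizontal pixel pairs
    hist = np.zeros((levels, levels), dtype=int)
    np.add.at(hist, (a.ravel(), b.ravel()), 1)        # co-occurrence counts
    prob = hist[a, b] / a.size                        # probability per pair
    n_keep = max(1, int(keep_frac * a.size))
    thresh = np.partition(prob.ravel(), n_keep - 1)[n_keep - 1]
    return prob <= thresh                             # lowest-probability pairs

# A flat template with one distinctive pixel: only the two pairs touching
# it have low co-occurrence probability, so only they are selected.
template = np.zeros((16, 16), dtype=np.uint8)
template[8, 8] = 255
mask = rare_pixel_mask(template)
```

On this toy template the mask keeps exactly the two pixel pairs straddling the bright spot, i.e. well under 1% of the pixels, mirroring the 0.2-1% selection rate reported in the abstract.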