
Latest publications in: Optomechatronic Technologies (ISOT), 2010 International Symposium on : 25-27 Oct. 2010 : [Toronto, ON]. International Symposium on Optomechatronic Technologies (2010 : Toronto, Ont.)

A vision based micro-assembly system for assembling components in mechanical watch movements
Qifeng Qi, R. Du
This paper presents a vision-based micro-assembly system designed to assemble various components of mechanical watch movements. With sizes of only a few millimeters, these components are traditionally assembled by skilled workers using specially designed tools and fixtures. To free workers from this tedious task, we designed and built a vision-based micro-assembly system. The system consists of an XY table driven by linear motors, a Z axis driven by a servomotor, a computer vision system, a set of grippers, and an industrial PC. The control software is written in C++. The accuracy of the system is about 2 μm and the cycle time is about 20 seconds, depending on the assembly task. The paper presents the system in detail, and two practical examples are included.
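The abstract gives only the hardware architecture and performance figures; purely as an illustration, the short Python sketch below shows a generic look-then-move correction loop of the kind such a vision-guided stage might run. All object and function names (camera, stage, gripper, locate_part) are hypothetical placeholders, not the authors' C++ interfaces.

    import time

    POSITION_TOLERANCE_UM = 2.0   # target accuracy quoted in the abstract (~2 um)
    MAX_ITERATIONS = 10           # illustrative bound, not a value from the paper

    def locate_part(frame):
        """Placeholder for the image-processing step (e.g. template matching)."""
        raise NotImplementedError

    def visual_servo_place(camera, stage, gripper, target_xy_um):
        """Iteratively correct the XY stage until the part lies within tolerance.

        camera, stage and gripper stand in for hypothetical device drivers."""
        for _ in range(MAX_ITERATIONS):
            frame = camera.capture()                 # grab an image of the assembly scene
            part_xy_um = locate_part(frame)          # vision: locate the part centre (um)
            ex = target_xy_um[0] - part_xy_um[0]
            ey = target_xy_um[1] - part_xy_um[1]
            if max(abs(ex), abs(ey)) <= POSITION_TOLERANCE_UM:
                gripper.release()                    # within tolerance: place the part
                return True
            stage.move_relative(ex, ey)              # linear-motor XY correction
            time.sleep(0.05)                         # let the stage settle before re-imaging
        return False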
{"title":"A vision based micro-assembly system for assembling components in mechanical watch movements","authors":"Qifeng Qi, R. Du","doi":"10.1109/ISOT.2010.5687332","DOIUrl":"https://doi.org/10.1109/ISOT.2010.5687332","url":null,"abstract":"This paper presents a vision based micro-assembly system, which is designed to assemble various components in mechanical watch movements. With their sizes around a few millimeters, these components are traditionally assembled by skilled labors with specially designed tools and fixtures. In order to liberate the labors from the tedious work, we designed and built a vision based micro assembly system. The system consists of a XY table driven by linear motors, a Z axis driven by a servomotor, a computer vision system, a set of grippers, and an industrial PC. The control software is written in C++. The accuracy of the system is about 2 μm and the cycle time is about 20 seconds depending on the assembly tasks. The paper presents the system in details. Two practical examples are included.","PeriodicalId":91154,"journal":{"name":"Optomechatronic Technologies (ISOT), 2010 International Symposium on : 25-27 Oct. 2010 : [Toronto, ON]. International Symposium on Optomechatronic Technologies (2010 : Toronto, Ont.)","volume":"134 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74701615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
A hybrid EFG-FE analysis for DOT forward problem
M. Hadinia, R. Jafari
This paper presents an approach based on a combination of the Element-Free Galerkin (EFG) method and the Finite Element (FE) method for the Diffuse Optical Tomography (DOT) forward problem. DOT is a non-invasive imaging modality for visualizing and continuously monitoring tissue and blood oxygenation levels in the brain and breast. The image reconstruction algorithm in DOT generates images from forward-modeling results and boundary measurements, so the ability of the forward model to generate the corresponding data efficiently plays a significant role in DOT image reconstruction. The FE technique on a fixed mesh is one of the most common techniques for solving the diffusion equation in the DOT forward problem; however, in some medical applications the meshing task is difficult, and the shape and size of the elements introduce a further approximation into the forward problem. The mesh-free Galerkin approach is also used in DOT, but imposing essential boundary conditions is difficult. In this paper, an approach combining the two methods is therefore used. The validity of the proposed method is investigated through simulation results.
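For background only (this is the standard form from the DOT literature, not an equation quoted from the paper), the forward problem that such an EFG-FE scheme discretizes is typically the frequency-domain diffusion approximation

    -\nabla \cdot \big( \kappa(\mathbf{r}) \, \nabla \Phi(\mathbf{r},\omega) \big)
      + \Big( \mu_a(\mathbf{r}) + \frac{i\omega}{c} \Big) \Phi(\mathbf{r},\omega)
      = q_0(\mathbf{r},\omega),
    \qquad
    \kappa = \frac{1}{3\,(\mu_a + \mu_s')},

where Φ is the photon density, μ_a and μ_s' are the absorption and reduced scattering coefficients, and q_0 is the source term; a Robin-type condition Φ + 2Aκ ∂Φ/∂n = 0 is usually imposed on the boundary to model the tissue-air interface.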
{"title":"A hybrid EFG-FE analysis for DOT forward problem","authors":"M. Hadinia, R. Jafari","doi":"10.1109/ISOT.2010.5687328","DOIUrl":"https://doi.org/10.1109/ISOT.2010.5687328","url":null,"abstract":"This paper presents an approach based on combination of Element Free Galerkin (EFG) method and Finite Element (FE) method in Diffuse Optical Tomography (DOT) forward problem. DOT is a non-invasive imaging modality for visualizing and continuously monitoring tissue and blood oxygenation levels in brain and breast. The image reconstruction algorithm in DOT involves generating images by means of forward modeling results and the boundary measurements. The ability of the forward model to generate the corresponding data efficiently has a sign ificant role in DOT image reconstruction. FE technique using a fixed mesh is one of the most typical techniques for solving the diffusion equation in the DOT forward problem. However, in some medical applications, meshing task is difficult and the shape and size of elements make a further approximation in the forward problem. Mesh free Galerkin approach is also utilized in DO T, but imposing essential boundary conditions is difficult. In this paper, an approach based on combination of the two methods is used. The validity of the proposed method is investigated by simulation results.","PeriodicalId":91154,"journal":{"name":"Optomechatronic Technologies (ISOT), 2010 International Symposium on : 25-27 Oct. 2010 : [Toronto, ON]. International Symposium on Optomechatronic Technologies (2010 : Toronto, Ont.)","volume":"27 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85681270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Proposal for 1×4 ultracompact arrayed waveguide grating based on Si-nanowire spirals
A. Rostami, H. Sattari, F. Janabi-Sharifi
A 1×4 ultracompact AWG based on Si-nanowires is proposed. In this configuration, waveguides containing a couple of spirals are used instead of the waveguides of conventional AWG systems. The spirals are arranged so that not only can a long light path be realized in a small area, but there is also more freedom in size adjustment. The presented AWG has 4 channels with a channel spacing of 0.2 nm and a total size of less than 250×290 µm².
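As general background rather than a result taken from the paper, the centre-wavelength condition that any AWG, spiral-armed or not, must satisfy is the grating equation

    n_c \, \Delta L = m \, \lambda_0,

where n_c is the effective index of the arrayed waveguides, ΔL is the constant path-length increment between adjacent arms (here realized with spirals), m is the grating order, and λ_0 is the centre wavelength; wavelengths offset from λ_0 acquire a linear phase tilt across the array and are therefore focused to different output positions, which is what produces the 0.2 nm channel spacing quoted above.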
{"title":"Proposal for 1×4 ultracompact arrayed waveguide grating based on Si-nanowire spirals","authors":"A. Rostami, H. Sattari, F. Janabi-Sharifi","doi":"10.1109/ISOT.2010.5687371","DOIUrl":"https://doi.org/10.1109/ISOT.2010.5687371","url":null,"abstract":"A 1×4 ultracompact AWG, based on Si-nanowires is proposed. In this configuration waveguides with couple of spirals are used instead of conventional AWG systems. Arrangement of spirals is in such a way that, not only makes it possible to reach a large lightpath in a small area, but also there is more freedom in size adjustment. Presented AWG has 4 channels with channel spacing of 0.2 nm and total size of less than 250×290 µm2.","PeriodicalId":91154,"journal":{"name":"Optomechatronic Technologies (ISOT), 2010 International Symposium on : 25-27 Oct. 2010 : [Toronto, ON]. International Symposium on Optomechatronic Technologies (2010 : Toronto, Ont.)","volume":"160 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86735400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Advanced optical methods for whole field displacement and strain measurement
Lianxiang Yang, Yonghong Wang, R. Lu
Measuring deformation and strain in materials and structures provides important information for designing and dimensioning products, as well as a scientific basis for optimization, quality control, and assurance. Digital Speckle Pattern Interferometry (DSPI) and Digital Image Correlation (DIC) are two typical whole-field, non-contact experimental techniques that allow rapid and highly accurate measurement of 3D deformation and strain distributions with high resolution. The former can measure small deformations (at the nanometric level) and can thus determine small strains (at the micro-strain level); the latter can measure relatively large deformations (micrometers and larger) and can thus determine large strains (from hundreds of micro-strain upward). The combination of these two techniques covers small to large ranges for whole-field, non-contact deformation and strain measurement, e.g. from the nanometric level to a few millimeters or more for deformation and from micro-strain to a few percent or more for strain. This paper reviews DSPI and DIC and their applications. Both their potential and their limitations are listed, the challenges these two techniques face in real-world applications are presented and analyzed, and novel developments and optimizations for practical application are presented or demonstrated.
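As an illustrative aid only (the correlation criterion is standard DIC practice, not code from the paper), the zero-normalized cross-correlation used to track a subset between the reference and deformed images can be written in a few lines of Python; the subset size and search range below are arbitrary, and sub-pixel refinement is omitted.

    import numpy as np

    def zncc(ref_subset, def_subset):
        """Zero-normalized cross-correlation; values near 1 indicate a good match."""
        f = ref_subset.astype(float) - ref_subset.mean()
        g = def_subset.astype(float) - def_subset.mean()
        denom = np.sqrt((f ** 2).sum() * (g ** 2).sum())
        return float((f * g).sum() / denom) if denom > 0 else 0.0

    def track_subset(ref_img, def_img, center, half=10, search=5):
        """Integer-pixel search for the displacement of one (2*half+1)^2 subset.
        Assumes the subset and its candidates stay inside the image bounds."""
        y, x = center
        ref = ref_img[y - half:y + half + 1, x - half:x + half + 1]
        best = (0, 0, -1.0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = def_img[y + dy - half:y + dy + half + 1,
                               x + dx - half:x + dx + half + 1]
                if cand.shape != ref.shape:          # skip candidates clipped by the border
                    continue
                score = zncc(ref, cand)
                if score > best[2]:
                    best = (dy, dx, score)
        return best   # (dy, dx, correlation)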
{"title":"Advanced optical methods for whole field displacement and strain measurement","authors":"Lianxiang Yang, Yonghong Wang, R. Lu","doi":"10.1109/ISOT.2010.5687394","DOIUrl":"https://doi.org/10.1109/ISOT.2010.5687394","url":null,"abstract":"Measuring deformation and strain in materials and structures provides important information for designing and dim ensioning products as well as providing a scientific basis for optimization, quality control and assurance. Digital Speckle Pattern Interferometry (DSPI) and Digital Image Correlation (DIC) are two typical whole-field, non-contact experimental tech niques that allow rapid and highly accurate measurement of 3D-deformation and strain distributions with high resolution. The former can measure small deformation (in nanometric level) and can thus determine small strain (in micro strain level), the latter can measure relatively large deformation (micrometer and larger) and can thus determine large strain (from hundreds of micro-strain to considerable value). The combination of these two techniques covers from small to large ranges for whole field, non-contacting deformation and strain measurement, e.g. from nanometric level to a few millimeters or larger for deformation measurement and from micro strain to a few percents or larger for strain measurement. This paper reviews ESPI and DIC and their applications. Both potentials and limitation are listed. The challenges of these two techniques for real world applications are presented and analyzed. The novel developments and optimizations for practical application are presented or demonstrated","PeriodicalId":91154,"journal":{"name":"Optomechatronic Technologies (ISOT), 2010 International Symposium on : 25-27 Oct. 2010 : [Toronto, ON]. International Symposium on Optomechatronic Technologies (2010 : Toronto, Ont.)","volume":"20 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82554714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Robot task planning and trajectory learning based on programming by demonstration
Peter Scheer, A. Alhalabi, I. Mantegh
This paper presents a method to model and reproduce cyclic trajectories captured from human demonstrations. Heuristic algorithms are used to determine the general type of pattern, its parameters, and its kinematic profile. The pattern is described independently of the shape of the surface on which it is demonstrated. Key pattern points are identified based on changes in direction and velocity and are then reduced based on their proximity. The results of this analysis are used inside a task-planning algorithm to produce robot trajectories based on the workpiece geometry. The trajectory is output in the form of robot native-language code so that it can be readily downloaded to the robot.
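The abstract only names the key-point step; the Python sketch below is a hedged reading of that idea, i.e. flagging samples where heading or speed changes sharply and then thinning the flagged points by proximity. All thresholds are illustrative assumptions, not the authors' values.

    import numpy as np

    def extract_key_points(points, times, angle_thresh_deg=20.0,
                           speed_change_thresh=0.3, min_spacing=5.0):
        """points: (N, 2) demonstrated positions; times: (N,) timestamps.
        Returns indices of key pattern points."""
        pts = np.asarray(points, dtype=float)
        t = np.asarray(times, dtype=float)
        vel = np.gradient(pts, t, axis=0)               # finite-difference velocity
        speed = np.linalg.norm(vel, axis=1)
        heading = np.arctan2(vel[:, 1], vel[:, 0])

        keys = [0]
        for i in range(1, len(pts) - 1):
            turn = np.degrees(abs(np.angle(np.exp(1j * (heading[i] - heading[i - 1])))))
            dv = abs(speed[i] - speed[i - 1]) / (speed[i - 1] + 1e-9)
            if turn > angle_thresh_deg or dv > speed_change_thresh:
                keys.append(i)                          # direction or velocity change
        keys.append(len(pts) - 1)

        reduced = [keys[0]]                             # proximity-based reduction
        for k in keys[1:]:
            if np.linalg.norm(pts[k] - pts[reduced[-1]]) >= min_spacing:
                reduced.append(k)
        return reduced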
{"title":"Robot task planning and trajectory learning based on programming by demonstration","authors":"Peter Scheer, A. Alhalabi, I. Mantegh","doi":"10.1109/ISOT.2010.5687310","DOIUrl":"https://doi.org/10.1109/ISOT.2010.5687310","url":null,"abstract":"This paper presents a method to model and reproduce cyclic trajectories captured from human demonstrations. Heuristic algorithms are used to determine the general type of pattern, its parameters, and its kinematic profile. The pattern is described independently of the shape of the surface on which it is demonstrated. Key pattern points are identified based on changes in direction and velocity, and are then reduced based on their proximity. The results of the analysis are provided are used inside a task planning algorithm, to produce robot trajectories based on the workpiece geometries. The trajectory is output in the form of robot native language code so that it can be readily downloaded on the robot.","PeriodicalId":91154,"journal":{"name":"Optomechatronic Technologies (ISOT), 2010 International Symposium on : 25-27 Oct. 2010 : [Toronto, ON]. International Symposium on Optomechatronic Technologies (2010 : Toronto, Ont.)","volume":"299 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73582485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptive particle filter based pose estimation using a monocular camera model
Mohammad Goli, A. Ghanbari, F. Janabi-Sharifi, Ghader Karimian Khosroshahi
Full camera pose estimation using only a monocular camera model is an important topic in the field of visual servoing. In this paper, a simple adaptive method for updating the weights of a particle filter is proposed. Using this method, the efficiency of the particle filter in estimating the full pose of the camera is improved. The results of the proposed method are compared with those of a generic particle filter (PF) and an EKF under the same conditions through intensive computer simulation.
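The abstract does not give the adaptive rule itself; the following Python fragment is only a generic sketch of a particle-filter weight update for pose estimation from image features, with a simple innovation-driven adjustment of the measurement-noise scale standing in, as an assumption, for the paper's adaptation.

    import numpy as np

    def update_weights(weights, predicted_features, measured_features, sigma):
        """Reweight particles by a Gaussian likelihood of the feature reprojection error.
        predicted_features: (N, M) features predicted from each particle's pose;
        measured_features: (M,) features observed in the current image."""
        err = predicted_features - measured_features
        sq = (err ** 2).sum(axis=1)
        w = weights * np.exp(-0.5 * sq / sigma ** 2)
        s = w.sum()
        return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

    def adapt_sigma(sigma, predicted_features, measured_features, gain=0.05):
        """Illustrative adaptation: pull sigma toward the observed innovation level."""
        rms = np.sqrt(np.mean((predicted_features - measured_features) ** 2))
        return (1.0 - gain) * sigma + gain * max(rms, 1e-6)

    def effective_sample_size(weights):
        return 1.0 / np.sum(weights ** 2)    # resample when this drops below a threshold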
{"title":"Adaptive particle filter based pose estimation using a monocular camera model","authors":"Mohammad Goli, A. Ghanbari, F. Janabi-Sharifi, Ghader Karimian Khosroshahi","doi":"10.1109/ISOT.2010.5687313","DOIUrl":"https://doi.org/10.1109/ISOT.2010.5687313","url":null,"abstract":"Camera full pose estimation using only a monocular camera model is an important topic in the field of visual servoing. In this paper a simple adaptive method for updating the weights of particle filter is proposed. Using this method, the efficiency of particle filter in estimating the full pose of camera is improved. Results of the proposed method are compared with those of generic particle filter (PF) and EKF under the same condition through an intensive computer simulation.","PeriodicalId":91154,"journal":{"name":"Optomechatronic Technologies (ISOT), 2010 International Symposium on : 25-27 Oct. 2010 : [Toronto, ON]. International Symposium on Optomechatronic Technologies (2010 : Toronto, Ont.)","volume":"4 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72751450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Rapid clustering of colorized 3D point cloud data for reconstructing building interiors
K. K. Sareen, G. Knopf, R. Canas
Range scanning of building interiors generates very large, partially spurious, and unstructured point cloud data. Accurate information extraction from such data sets is a complex task due to the presence of multiple objects, the diversity of their shapes, the large disparity in feature sizes, and the spatial uncertainty caused by occluded regions. A fast segmentation of such data is necessary for quick understanding of the scanned scene. Unfortunately, traditional range segmentation methodologies are computationally expensive because they rely almost exclusively on shape parameters (normal, curvature) and are highly sensitive to small geometric distortions in the captured data. This paper introduces a quick and effective segmentation technique for large volumes of colorized range scans of unknown building interiors, labelling clusters of points that represent distinct surfaces and objects in the scene. Rather than computing geometric parameters, the proposed technique uses a robust Hue, Saturation and Value (HSV) color model as an effective means of identifying rough clusters (objects), which are further refined by eliminating spurious and outlier points through region growing and a fixed-distance-neighbors (FDN) analysis. The results demonstrate that the proposed method is effective in identifying continuous clusters and can extract meaningful object clusters even from geometrically similar regions.
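The abstract outlines the pipeline without pseudocode; the Python sketch below is a hedged interpretation of it, forming rough clusters from coarse HSV colour bins and then pruning isolated points with a fixed-distance-neighbours test. The bin width, radius and count thresholds are hypothetical, and SciPy is assumed for the neighbour search.

    import numpy as np
    from scipy.spatial import cKDTree

    def rough_hsv_clusters(colors_hsv, hue_bin_deg=30.0, sat_min=0.1):
        """Assign each point a coarse label from its hue (in degrees); points with
        low saturation are grouped into an achromatic bin (label 0)."""
        hue = colors_hsv[:, 0] % 360.0
        labels = (hue // hue_bin_deg).astype(int) + 1
        labels[colors_hsv[:, 1] < sat_min] = 0
        return labels

    def prune_with_fdn(points_xyz, labels, radius=0.05, min_neighbors=5):
        """Fixed-distance-neighbours check: keep a point's label only if enough
        same-labelled points lie within 'radius'; otherwise mark it as an outlier (-1)."""
        pruned = labels.copy()
        for lab in np.unique(labels):
            idx = np.where(labels == lab)[0]
            tree = cKDTree(points_xyz[idx])
            counts = np.array([len(tree.query_ball_point(p, radius)) - 1
                               for p in points_xyz[idx]])
            pruned[idx[counts < min_neighbors]] = -1
        return pruned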
{"title":"Rapid clustering of colorized 3D point cloud data for reconstructing building interiors","authors":"K. K. Sareen, G. Knopf, R. Canas","doi":"10.1109/ISOT.2010.5687331","DOIUrl":"https://doi.org/10.1109/ISOT.2010.5687331","url":null,"abstract":"Range scanning of building interiors generates very large, partially spurious and unstructured point cloud data. Accurate information extraction from such data sets is a complex task due to the presence of multiple objects, diversity of their shapes, large disparity in the feature sizes, and the spatial uncertainty due to occluded regions. A fast segmentation of such data is necessary for quick understanding of the scanned scene. Unfortunately, traditional range segmentation methodologies are computationally expensive because they rely almost exclusively on shape parameters (normal, curvature) and are highly sensitive to small geometric distortions in the captured data. This paper introduces a quick and effective segmentation technique for large volumes of colorized range scans from unknown building interiors and labelling clusters of points that represent distinct surfaces and objects in the scene. Rather than computing geometric parameters, the proposed technique uses a robust Hue, Saturation and Value (HSV) color model as an effective means of id entifying rough clusters (objects) that are further refined by eliminating spurious and outlier points through region growth an d a fixed distance neighbors (FDNs) analysis. The results demonstrate that the proposed method is effective in identifying continuous clusters and can extract meaningful object clusters, even from geometrically similar regions.","PeriodicalId":91154,"journal":{"name":"Optomechatronic Technologies (ISOT), 2010 International Symposium on : 25-27 Oct. 2010 : [Toronto, ON]. International Symposium on Optomechatronic Technologies (2010 : Toronto, Ont.)","volume":"23 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79320486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Depth-of-field extension through focal plane oscillation and variable annular pupil
D. Hong, Hyungsuck Cho
In this paper, a depth-of-field extension method is introduced. The extension is realized by the variable annular aperture method previously proposed by the authors combined with a focal plane oscillation method. By combining these methods, we obtain a synergetic effect: the depth of field is extended further than when each method is applied independently. The variable aperture and the focal plane oscillation are realized by a liquid crystal spatial light modulator and a deformable mirror, respectively. Simulation and experimental results are shown to verify the proposed method.
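As a reminder of the underlying optics (a textbook relation, not a formula quoted from the paper), a variable annular pupil of outer radius R and obscuration ratio ε has the transmission

    P(r) =
    \begin{cases}
    1, & \varepsilon R \le r \le R,\\
    0, & \text{otherwise},
    \end{cases}

so increasing ε narrows the effective cone of rays and lengthens the depth of field at the cost of light throughput, while the deformable mirror sweeps the focal plane through the extended range during acquisition.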
{"title":"Depth-of-field extension through focal plane oscillation and variable annular pupil","authors":"D. Hong, Hyungsuck Cho","doi":"10.1109/ISOT.2010.5687321","DOIUrl":"https://doi.org/10.1109/ISOT.2010.5687321","url":null,"abstract":"In this paper, a depth-of-field extension method is in troduced. The extension is realized by the variable annular aperture method previously proposed by the authors and focal plane oscillation method. By combining those methods, we see a synergetic effect that the depth-of-field is more extended than when each method is applied independently. The variable aperture and the focal plane oscillation are realized by a liquid crystal spatial light modulator and a deformable mirror, respectively. Simulation and experimental results are shown to verify the proposed method.","PeriodicalId":91154,"journal":{"name":"Optomechatronic Technologies (ISOT), 2010 International Symposium on : 25-27 Oct. 2010 : [Toronto, ON]. International Symposium on Optomechatronic Technologies (2010 : Toronto, Ont.)","volume":"3 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78996031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
2-D Mechanically resonating fiberoptic scanning display system
Wei-Chih Wang, C. Tsui
A micro-display system based on an optical fiber driven by a 2-D piezoelectric actuator is presented. An optical fiber extends from the free end of a 2-D piezoelectric actuator (free length: 4.3 mm). A field-programmable gate array (FPGA) drives the actuator with triangular waveforms to deflect the optical fiber in orthogonal directions. The FPGA also controls an LED light source, whose light is coupled into a chemically tapered SMF-28 fiber (core diameter: 10 µm). At a pre-set pairing of near-resonance frequencies (horizontal: 22 Hz; vertical: 5070 Hz), the deflected optical fiber in combination with the controlled light produces an output image with an approximate dimension of 0.3×0.3 mm² and a potential resolution of 400 lines per scan. This paper details the design and fabrication of the micro-display system. The mechanical and optical design of the micro-resonating scanner is discussed, and the mechanical and optical performance and the resulting output of the 2-D scanner are presented.
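Purely as an illustration of the drive scheme described (not the authors' FPGA implementation), the two triangular waveforms at the quoted near-resonance frequencies can be generated in Python as follows; the sample rate and amplitudes are arbitrary placeholders.

    import numpy as np

    def triangle_wave(freq_hz, t):
        """Unit-amplitude triangular waveform, the drive shape applied to both axes."""
        return 2.0 * np.abs(2.0 * ((t * freq_hz) % 1.0) - 1.0) - 1.0

    fs = 100_000                        # simulation sample rate (Hz), illustrative
    t = np.arange(0.0, 0.2, 1.0 / fs)   # 0.2 s of drive signal
    x_drive = triangle_wave(22.0, t)    # horizontal axis, 22 Hz (from the abstract)
    y_drive = triangle_wave(5070.0, t)  # vertical axis, 5070 Hz (from the abstract)
    scan_xy = np.column_stack([x_drive, y_drive])   # nominal tip trajectory before
                                                    # the fiber's mechanical response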
{"title":"2-D Mechanically resonating fiberoptic scanning display system","authors":"Wei-Chih Wang, C. Tsui","doi":"10.1109/ISOT.2010.5687342","DOIUrl":"https://doi.org/10.1109/ISOT.2010.5687342","url":null,"abstract":"A micro-display system based on optical fiber driven by a 2-D piezoelectric actuator is presented. An optical fiber is extended from the free end of a 2-D piezoelectric actuator (free length: 4.3mm). A field programmable gate array (FPGA) drives the actuator using a triangular waveform to deflect the optical fiber in orthogonal directions. The FPGA also controls a LED light source and the light is coupled into a chemically tapered SMF-28 fiber (core diameter: 10µm). At a pre-set pairing of ne ar-resonance frequencies (Horizontal: 22Hz; Vertical:5070Hz), the deflected optical fiber in combination with controlled light pr oduces an output image with an approximate dimension of 0.3×0.3 mm2, and a potential resolution of 400 lines per scan. This pa per details the design and fabrication of the micro-display system. The mechanical and optical design for the microresonating scanner will be discussed. In addition, the mechanical and optical performance and the resulting output of the 2-D scanner will be presented.","PeriodicalId":91154,"journal":{"name":"Optomechatronic Technologies (ISOT), 2010 International Symposium on : 25-27 Oct. 2010 : [Toronto, ON]. International Symposium on Optomechatronic Technologies (2010 : Toronto, Ont.)","volume":"47 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82310746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Extraction of unique pixels based on co-occurrence probability for high-speed template matching
M. Hashimoto, T. Fujiwara, H. Koshimizu, H. Okuda, K. Sumi
We propose a high-speed template matching method that uses a small number of pixels representing a statistical subset of the original template image. Generally, reducing the number of template pixels lowers the computational cost of matching; however, high speed and high reliability often trade off against each other in practice. To realize reliable matching, it is important to extract a few pixels that have unique characteristics in terms of their location and intensity. For this purpose, analysis of the co-occurrence histogram of local combinations of multiple pixels is useful, because it provides information about their simultaneous occurrence probability. In the proposed method, pixels with low co-occurrence probability are preferentially extracted as the significant template pixels used in the matching process. We also propose a method to approximate the n-pixel co-occurrence probability using several two-dimensional co-occurrence histograms in order to save memory. Experiments on more than 480 test images show that the roughly 0.2 to 1% of template pixels extracted by the proposed method achieve practical performance: the recognition success rate is 96.6% and the processing time is 15 ms (on a Core 2 Duo at 3.16 GHz).
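To make the selection idea concrete (a hedged sketch, with the pairing offset, quantization and keep fraction as assumptions rather than the paper's settings), one two-dimensional co-occurrence histogram and the resulting ranking of template pixels could look like this in Python:

    import numpy as np

    def select_unique_pixels(template, offset=(0, 3), keep_fraction=0.005, bins=32):
        """Rank template pixels (8-bit grayscale) by the co-occurrence probability of
        the intensity pair (I(p), I(p + offset)) and keep the rarest few for matching.
        The paper approximates n-pixel co-occurrence with several 2-D histograms of
        this kind; here only a single pairing offset is shown."""
        h, w = template.shape
        dy, dx = offset
        q = (template.astype(int) * bins) // 256        # quantize 8-bit intensities
        a = q[:h - dy, :w - dx].ravel()                  # intensity at pixel p
        b = q[dy:, dx:].ravel()                          # intensity at pixel p + offset
        hist = np.zeros((bins, bins), dtype=np.int64)
        np.add.at(hist, (a, b), 1)                       # 2-D co-occurrence histogram
        prob = hist / hist.sum()

        scores = prob[a, b]                              # probability of each pixel's pair
        n_keep = max(1, int(keep_fraction * a.size))     # e.g. ~0.2-1% of the pixels
        order = np.argsort(scores)[:n_keep]              # rarest combinations first
        ys, xs = np.unravel_index(order, (h - dy, w - dx))
        return list(zip(ys.tolist(), xs.tolist()))       # coordinates of selected pixels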
{"title":"Extraction of unique pixels based on co-occurrence probability for high-speed template matching","authors":"M. Hashimoto, T. Fujiwara, H. Koshimizu, H. Okuda, K. Sumi","doi":"10.1109/ISOT.2010.5687336","DOIUrl":"https://doi.org/10.1109/ISOT.2010.5687336","url":null,"abstract":"We propose a high-speed template matching method using small number of pixels that represent statistical subset of an original template image. Generally, to reduce the number of template pixels means low computational cost of matching. However, high-speed and high-reliability often have trade-off relation in actual situations. In order to realize reliable matching, it is important to extract few pixels that have unique characteristics about their location and intensity. For this purpose, analysis of co-occurrence histogram for local combination of multiple pixels is useful, because it provides beneficial information about simultaneous occurrence probability. In the proposed method, pixels with low co-occurrence probability are preferentially extracted as significant template pixels used for matching process. Also we propose a method to approximate n-pixels co-occurrence probability using some two-dimensional co-occurrence histograms to save memory space. Through some experiments using more than 480 test images, it has been proved that approximately 0.2 to 1% of template pixels extracted by proposed method can achieve practical performance. The recognition success rate is 96.6%, and the processing time is 15msec (by Core 2 Duo 3.16GHz).","PeriodicalId":91154,"journal":{"name":"Optomechatronic Technologies (ISOT), 2010 International Symposium on : 25-27 Oct. 2010 : [Toronto, ON]. International Symposium on Optomechatronic Technologies (2010 : Toronto, Ont.)","volume":"99 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81027556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9