
[1992] Proceedings IEEE Workshop on Applications of Computer Vision: Latest Publications

Scale-space clustering and classification of SAR images with numerous attributes and classes
Pub Date : 1992-11-30 DOI: 10.1109/ACV.1992.240325
Yiu-fai Wong, E. Posner
Describes application of scale-space clustering to the classification of a multispectral and polarimetric SAR image of an agricultural site. After polarimetric and radiometric calibration and noise cancellation, the authors extracted a 12-dimensional feature vector for each pixel from the scattering matrix. The algorithm was able to partition without supervision a set of unlabeled vectors from 13 selected sites, each site corresponding to a distinct crop, into 13 clusters. The cluster parameters were then used to classify the whole image. The classification map is much less noisy and more accurate than those obtained by hierarchical rules. The algorithm can handle variabilities in cluster densities, cluster sizes and ellipsoidal shapes.
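The final classification step described above, assigning every pixel to one of the 13 clusters using the fitted cluster parameters, can be pictured with the minimal sketch below. It assumes Gaussian cluster parameters (means and covariances) have already been estimated; the scale-space clustering itself and the SAR calibration steps are not reproduced, and all array names are illustrative.

```python
import numpy as np

def classify_pixels(features, means, covs):
    """Assign each pixel's feature vector to the nearest cluster.

    features : (H, W, D) array of per-pixel feature vectors
    means    : (K, D) cluster means
    covs     : (K, D, D) cluster covariances
    Returns an (H, W) label map (nearest cluster in Mahalanobis distance).
    """
    H, W, D = features.shape
    X = features.reshape(-1, D)                      # flatten to (N, D)
    dists = np.empty((X.shape[0], means.shape[0]))
    for k, (mu, cov) in enumerate(zip(means, covs)):
        diff = X - mu                                # (N, D)
        inv = np.linalg.inv(cov)
        # squared Mahalanobis distance of every pixel to cluster k
        dists[:, k] = np.einsum('nd,de,ne->n', diff, inv, diff)
    return dists.argmin(axis=1).reshape(H, W)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy stand-in for a 12-dimensional SAR feature image with 13 classes
    feats = rng.normal(size=(64, 64, 12))
    means = rng.normal(size=(13, 12))
    covs = np.stack([np.eye(12)] * 13)
    label_map = classify_pixels(feats, means, covs)
    print(label_map.shape, label_map.min(), label_map.max())
```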
Citations: 1
Interactive map conversion: combining machine vision and human input
Pub Date : 1992-11-30 DOI: 10.1109/ACV.1992.240304
F. Quek, Michael C. Petro
The authors present an interactive map conversion system which combines a human operator's high level reasoning with machine perception under the Human-Machine Perceptual Cooperation (HMPC) paradigm. HMPC defines two channels of interaction: the focus of attention (FOA) by which the user directs the attention of machine perception, and context. As the user moves the FOA across a raster map display via a pointing device, a smart cursor operates proactively on the data, highlighting objects for extraction. The FOA permits foveal emphasis, enabling the user to vary motor precision with map clutter. HMPC provides for contexts at four levels of abstraction. This permits the efficiency of the system to degrade gracefully as data quality worsens. They also present a boundary-based line follower which computes line thickness, and an isolated symbol extractor based on feature-vectors.
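The smart-cursor behaviour, highlighting the raster object under the user's focus of attention, can be illustrated with a small sketch that returns the connected ink component nearest the FOA point. This only illustrates the interaction idea, not the paper's line follower or symbol extractor; the function name and parameters are invented for the example and SciPy is assumed to be available.

```python
import numpy as np
from scipy import ndimage

def object_under_foa(binary_map, foa_xy, radius=5):
    """Return a mask of the connected raster object under the focus of attention.

    binary_map : (H, W) bool array, True where map ink is present
    foa_xy     : (x, y) cursor position in pixel coordinates
    radius     : search radius around the cursor, in pixels
    """
    labels, _ = ndimage.label(binary_map)            # 4-connected components by default
    x, y = foa_xy
    h, w = binary_map.shape
    # look at the labels inside a small window around the cursor
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    window = labels[y0:y1, x0:x1]
    hits = window[window > 0]
    if hits.size == 0:
        return np.zeros_like(binary_map)             # nothing under the cursor
    target = np.bincount(hits).argmax()              # most common object label in the window
    return labels == target
```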
Citations: 2
Visual processing for autonomous driving
Pub Date : 1992-11-30 DOI: 10.1109/ACV.1992.240315
Henry Schneiderman, M. Nashman
Describes a visual processing algorithm that supports autonomous road following. The algorithm requires that lane markings be present and attempts to track the lane markings on both lane boundaries. There are three stages of computation: extracting edges, matching extracted edge points with a geometric model of the road, and updating the geometric road model. All processing is confined to the 2-D image plane. No information about the motion of the vehicle is used. This algorithm has been implemented and tested using videotaped road scenes. It performs robustly for both highways and rural roads. The algorithm runs at a sampling rate of 15 Hz and has a worst-case latency of 139 milliseconds (ms). The algorithm is implemented under the NASA/NBS Standard Reference Model for Telerobotic Control System Architecture (NASREM) architecture and runs on a dedicated vision processing engine and a VME-based microprocessor system.
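The match-and-update part of the three-stage loop could look roughly like the sketch below for a single lane boundary, here simplified to a straight line x = a*y + b in the image plane. The paper's actual road model, gating and update rule are not given in the abstract, so this is an assumption-laden illustration; edge extraction is taken as given.

```python
import numpy as np

def update_lane_model(edge_points, prev_model, gate=10.0, alpha=0.5):
    """One simplified matching/update step for a single lane boundary.

    edge_points : (N, 2) array of (x, y) edge locations in the image plane
    prev_model  : (a, b) for the line x = a*y + b from the previous frame
    gate        : matching gate in pixels; edges farther from the predicted
                  line are ignored
    alpha       : blending factor between old and new model parameters
    """
    a, b = prev_model
    x, y = edge_points[:, 0], edge_points[:, 1]
    residual = np.abs(x - (a * y + b))               # distance to the predicted boundary (in x)
    matched = edge_points[residual < gate]           # stage 2: match edges to the model
    if len(matched) < 2:
        return prev_model                            # not enough support; keep the old model
    # stage 3: refit x = a*y + b to the matched edges and blend with the old model
    A = np.column_stack([matched[:, 1], np.ones(len(matched))])
    (a_new, b_new), *_ = np.linalg.lstsq(A, matched[:, 0], rcond=None)
    return (alpha * a + (1 - alpha) * a_new,
            alpha * b + (1 - alpha) * b_new)
```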
Citations: 29
Adaptive control techniques for dynamic visual repositioning of hand-eye robotic systems
Pub Date : 1992-11-30 DOI: 10.1109/ACV.1992.240321
N. Papanikolopoulos, P. Khosla
Using active monocular vision for 3-D visual control tasks is difficult since the translational and the rotational degrees of freedom are strongly coupled. The paper addresses several issues in 3-D visual control and presents adaptive control schemes for the problem of robotic visual servoing (eye-in-hand configuration) around a static rigid target. The objective is to move the image projections of several feature points of the static rigid target to some desired image positions. The inverse perspective transformation is assumed partially unknown. The adaptive controllers compensate for the servoing errors, the partially unknown camera parameters, and the computational delays which are introduced by the time-consuming vision algorithms. The authors present a stability analysis along with a study of the conditions that the feature points must satisfy in order for the problem to be solvable. Finally, several experimental results are presented to verify the validity and the efficacy of the proposed algorithms.
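For context, a plain image-based visual-servoing step for point features, using the classical interaction matrix and its pseudo-inverse, is sketched below. It is a simplified, non-adaptive baseline rather than the adaptive controllers proposed in the paper; the depth estimates, gain and normalized image coordinates are assumed inputs.

```python
import numpy as np

def ibvs_velocity(features, desired, depths, gain=0.5):
    """One image-based visual-servoing step for point features.

    features : (N, 2) current normalized image coordinates (x, y)
    desired  : (N, 2) desired normalized image coordinates
    depths   : (N,) estimated depth Z of each feature point
    Returns a 6-vector camera velocity (vx, vy, vz, wx, wy, wz).
    """
    rows = []
    for (x, y), Z in zip(features, depths):
        # classical interaction (image Jacobian) matrix rows for a point feature
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    L = np.array(rows)                               # (2N, 6) stacked interaction matrix
    error = (features - desired).reshape(-1)         # (2N,) image error, interleaved x/y
    # proportional control law: drive the image error to zero
    return -gain * np.linalg.pinv(L) @ error
```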
Citations: 2
PROMAP-a system for analysis of topographic maps
Pub Date : 1992-11-30 DOI: 10.1109/ACV.1992.240328
B. Lauterbach, N. Ebi, P. Besslich
A system for automatic data acquisition from topographic maps (PROMAP-Processing of Maps) is presented. Maps are an important source of information for efficient spatial data evaluation using Geographic Information Systems (GIS). At present many relevant maps still have to be digitized manually, which is a time-consuming and error-prone process. To improve the situation, the authors developed the PROMAP system, which incorporates adequate image analysis methods. The system is capable of generating a symbolic description of the map contents that may be imported into a GIS (e.g. ARC/INFO).
Citations: 8
Interpolation of cinematic sequences
Pub Date : 1992-11-30 DOI: 10.1109/ACV.1992.240329
Jordi Ribas-Corbera, J. Sklansky
Presents a new algorithm for interframe interpolation of cinematic sequences. The authors demonstrate its applicability to video data compression of pedestrian traffic and data compression for video conferencing. In both of these applications it is assumed that the background is nearly stationary and that there are no interobject occlusions. The interpolation algorithm makes use of estimates of optical flow to compensate for the motion of objects between two frames. The authors describe three major problems associated with motion-compensated cinematic interpolation: interframe occlusion, interframe zooming and figure-ground ambiguity. Their algorithm suppresses artifacts caused by all three of these problems.
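A deliberately naive motion-compensated interpolation step, warping both frames toward an intermediate time with a given optical-flow field, is sketched below for orientation. The occlusion, zooming and figure-ground handling that the paper adds are exactly what this sketch omits; the flow field is an assumed input.

```python
import numpy as np

def interpolate_midframe(frame0, frame1, flow, t=0.5):
    """Naive motion-compensated interpolation of a frame at time t in (0, 1).

    frame0, frame1 : (H, W) grayscale frames
    flow           : (H, W, 2) forward optical flow from frame0 to frame1, as (dx, dy)
    The occlusion, zooming and figure-ground handling described in the paper
    are NOT reproduced; this just blends two motion-compensated warps.
    """
    H, W = frame0.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # sample frame0 at the position a pixel occupied before moving for time t
    x0 = np.clip(np.round(xs - t * flow[..., 0]).astype(int), 0, W - 1)
    y0 = np.clip(np.round(ys - t * flow[..., 1]).astype(int), 0, H - 1)
    # sample frame1 at the position the pixel will reach in the remaining time
    x1 = np.clip(np.round(xs + (1 - t) * flow[..., 0]).astype(int), 0, W - 1)
    y1 = np.clip(np.round(ys + (1 - t) * flow[..., 1]).astype(int), 0, H - 1)
    warped0 = frame0[y0, x0]
    warped1 = frame1[y1, x1]
    return (1 - t) * warped0 + t * warped1
```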
Citations: 2
New visual invariants for obstacle detection using optical flow induced from general motion
Pub Date : 1992-11-30 DOI: 10.1109/ACV.1992.240322
Gin-Shu Young, T. Hong, M. Herman, Jackson C. S. Yang
To operate autonomous vehicles safely, obstacles must be detected before any path planning and obstacle avoidance activity is undertaken. In the paper, a novel approach to obstacle detection is developed. New visual linear invariants based on optical flow have been developed. Employing the linear invariance property, obstacles can be directly detected by using a reference flow line obtained from measured optical flow. This method can be used for ground vehicles to navigate through man-made roadways or natural outdoor terrain or for air vehicles to land on known or unknown terrain.
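One way to picture the reference-flow-line idea is sketched below: fit a line to the flow measured over presumed ground pixels and flag rows that depart from it. This is only an illustrative reading under a planar-ground assumption, not the paper's invariants; the fitting region and threshold are arbitrary choices.

```python
import numpy as np

def flag_obstacles(flow_mag_column, rows, threshold=2.0):
    """Flag rows whose flow deviates from a linear ground-plane reference.

    flow_mag_column : (N,) optical-flow magnitudes sampled along one image column
    rows            : (N,) corresponding image-row coordinates
    threshold       : allowed deviation from the fitted reference flow line
    Returns a boolean mask, True where the measurement departs from the line,
    i.e. where the ground-plane assumption is violated by a potential obstacle.
    """
    n = len(rows)
    # fit the reference line to the lower half of the column, which is assumed
    # to image unobstructed ground close to the vehicle
    fit_idx = np.argsort(rows)[n // 2:]
    slope, intercept = np.polyfit(rows[fit_idx], flow_mag_column[fit_idx], deg=1)
    reference = slope * rows + intercept             # reference flow line
    return np.abs(flow_mag_column - reference) > threshold
```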
Citations: 8
Projectile impact detection and performance evaluation using machine vision
Pub Date : 1992-11-30 DOI: 10.1109/ACV.1992.240309
B. Mobasseri
The paper reports on the development of a machine vision system for assessing targeting accuracy of ballistic, projectile-firing weapon systems. Current techniques rely on either manual optical sighting or acoustic signature to locate the point of impact. Optical sighting, still the predominant method in many events, is manual and imprecise. Acoustic-based approaches automate the process but require multiple sensor placements. The machine vision system described is able to continuously monitor the target, report precise quantitative targeting information and simultaneously provide a color-coded display of impacts. Special provisions have been built in to account for target plane motion and the overlapping impacts phenomenon.
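A generic change-detection sketch in the spirit of the system, differencing the current target image against a reference to localize new marks, is given below. It is not the paper's method: the provisions for target-plane motion and overlapping impacts are not reproduced, and the thresholds are illustrative; SciPy is assumed available.

```python
import numpy as np
from scipy import ndimage

def detect_new_impacts(reference, current, diff_thresh=40, min_area=4):
    """Locate new impact marks by differencing against a reference target image.

    reference, current : (H, W) 8-bit grayscale images of the target face
    Returns a list of (row, col) centroids of newly appeared marks.
    """
    diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
    mask = diff > diff_thresh                        # pixels that changed significantly
    labels, n = ndimage.label(mask)
    centroids = []
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() >= min_area:                 # ignore isolated noise pixels
            centroids.append(ndimage.center_of_mass(region))
    return centroids
```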
Citations: 0
A shadow handler in a video-based real-time traffic monitoring system
Pub Date : 1992-11-30 DOI: 10.1109/ACV.1992.240332
M. Kilger
A video-based system for traffic monitoring is presented. The objective of the system is to set up a high-level description of the traffic scene comprising the position, speed and class of the vehicles. Algorithms for detecting moving objects, separating the vehicles from their shadows, tracking and classification are presented. The classification of vehicles under sunny conditions is very difficult if the shadow is not separated from the vehicles. This approach for classification runs in real time on low-cost hardware. The shadow can be separated from the vehicle, and the knowledge about the shape of the shadow can be efficiently used. The shadow analysis algorithm itself uses high-level knowledge about the geometry of the scene (heading of the observed road) and about global data (date and time).
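Shadow/vehicle separation is sketched below with a common photometric heuristic (shadow pixels darken the background by a bounded factor) rather than the geometric, date/time-based analysis the paper uses; the ratio bounds and function name are assumptions made for the example.

```python
import numpy as np

def split_vehicle_and_shadow(frame, background, fg_mask, low=0.4, high=0.9):
    """Split a foreground blob into vehicle and cast-shadow pixels.

    frame, background : (H, W) grayscale images
    fg_mask           : (H, W) bool mask of detected moving pixels
    Shadow pixels are taken to be foreground pixels that darken the background
    by a bounded factor; everything else in the mask is kept as vehicle.
    Returns (vehicle_mask, shadow_mask).
    """
    ratio = frame.astype(np.float64) / np.maximum(background.astype(np.float64), 1.0)
    shadow = fg_mask & (ratio > low) & (ratio < high)   # darker than background, but not too dark
    vehicle = fg_mask & ~shadow
    return vehicle, shadow
```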
Citations: 214
A segmentation method for multi-connected particle delineation
Pub Date : 1992-11-30 DOI: 10.1109/ACV.1992.240305
Xing-Qiang Wu, J. Kemeny
An automatic particle segmentation system is developed for calculating the size distribution of rock fragments created by blasting. A rock composite due to blasting is often fully multi-connected, in which individual particles cannot be delineated by the existing segmentation algorithms. Two algorithms are proposed to approach this multi-connected segmentation problem. The first algorithm analyzes the shape of each shadow (a simply-connected region) and 'splits' the particles from shadow boundary convexity points if a relatively large gradient path occurs. The second algorithm finds clusters of rock particles which may not be delineated due to the lack of a strong gradient along the touching portions and delineates them using shape heuristics. A large number of test results show that the method is fast and accurate.
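As a commonly used stand-in for this kind of splitting of touching particles, a distance-transform watershed is sketched below; the paper's convexity-point, gradient-path and shape-heuristic steps are not reproduced, the `min_distance` parameter is an assumption, and scikit-image and SciPy are assumed available.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_particles(binary_mask, min_distance=10):
    """Separate touching particles with a distance-transform watershed.

    binary_mask : (H, W) bool array, True inside particle regions
    Returns an (H, W) integer label image, one label per separated particle.
    """
    distance = ndimage.distance_transform_edt(binary_mask)
    # one marker per local maximum of the distance map (roughly one per particle)
    peaks = peak_local_max(distance, min_distance=min_distance, labels=binary_mask)
    markers = np.zeros_like(binary_mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # flood the inverted distance map from the markers, constrained to the mask
    return watershed(-distance, markers, mask=binary_mask)
```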
Citations: 16