
Latest publications from the 2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)

Estimating Plant Centers Using A Deep Binary Classifier
Pub Date: 2018-04-08 | DOI: 10.1109/SSIAI.2018.8470367
Yuhao Chen, Javier Ribera, E. Delp
Phenotyping is the process of estimating the physical and chemical properties of a plant. Traditional phenotyping is labor-intensive and time-consuming. These measurements can be obtained faster by collecting aerial images with an Unmanned Aerial Vehicle (UAV) and analyzing them using modern image analysis techniques. We propose a method that estimates plant centers by classifying each pixel as either a plant center or not a plant center; the predicted center pixels are then grouped into clusters, and the center of each cluster is labeled as a plant location. We studied the performance of our method on two datasets, achieving 84% precision and 90% recall on a dataset of early-stage plants and 62% precision and 77% recall on a dataset of later-stage plants.
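The clustering step can be illustrated with a minimal post-processing sketch. It assumes the classifier outputs a per-pixel probability map; the threshold and the connected-component grouping shown here are illustrative, not necessarily the authors' exact procedure:

```python
import numpy as np
from scipy import ndimage

def plant_locations(center_prob, threshold=0.5):
    """Group pixels classified as plant centers and return one (row, col)
    location per connected cluster (its centroid)."""
    mask = center_prob >= threshold          # binary map of "plant center" pixels
    labels, n = ndimage.label(mask)          # connected-component clustering
    if n == 0:
        return np.empty((0, 2))
    # centroid of each labeled cluster = estimated plant location
    return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))

# toy example: two small blobs of predicted center pixels
prob = np.zeros((100, 100))
prob[10:13, 20:23] = 0.9
prob[60:63, 70:73] = 0.8
print(plant_locations(prob))   # approx. [[11, 21], [61, 71]]
```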
Cited by: 4
Sleep Analysis Using Motion and Head Detection
Pub Date: 2018-04-08 | DOI: 10.1109/SSIAI.2018.8470323
J. Choe, D. M. Montserrat, A. Schwichtenberg, E. Delp
Videosomnography (VSG) is a range of video-based methods used to record and assess sleep vs. wake states in adults and children. Traditional behavioral-VSG (B-VSG) coding requires almost real-time visual inspection by trained technicians/coders to determine sleep vs. wake states. In this paper we describe an automated VSG sleep detection system (auto-VSG) which employs motion analysis to determine sleep vs. wake states in young children. We used child head size to normalize the motion index and to provide an individual motion maximum for each child. We compared the proposed auto-VSG method to (1) traditional B-VSG codes and (2) actigraphy sleep vs. wake estimates across four sleep parameters: sleep onset time, sleep offset time, awake duration, and sleep duration. In sum, the analyses revealed that estimates generated by the proposed auto-VSG method and by B-VSG are comparable.
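A minimal sketch of a head-size-normalized motion index is shown below. It assumes grayscale video frames and a known head area per child; the smoothing window and threshold are illustrative values, not the paper's parameters:

```python
import numpy as np

def motion_index(frames, head_area, eps=1e-6):
    """Per-frame motion index: mean absolute frame difference, normalized by
    the child's head area so the scale is comparable across children.
    frames: (N, H, W) grayscale array; head_area: pixels covered by the head."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))
    return diffs / (head_area + eps)

def sleep_wake(frames, head_area, thresh=0.01, win=30):
    """Label each frame transition wake (1) or sleep (0) from the smoothed,
    normalized motion index. 'thresh' and 'win' are illustrative values."""
    m = motion_index(frames, head_area)
    smoothed = np.convolve(m, np.ones(win) / win, mode="same")
    return (smoothed > thresh).astype(int)
```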
Cited by: 3
The precision of triangulation in monocular visual odometry
Pub Date: 2018-04-08 | DOI: 10.1109/SSIAI.2018.8470356
Nolang Fanani, R. Mester
We analyze the depth reconstruction precision and sensitivity of two-frame triangulation for the case of general motion, focusing on monocular visual odometry, that is, a single camera looking mostly in the direction of motion. The results confirm intuitive assumptions about the limited triangulation precision close to the focus of expansion.
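As background, two-frame triangulation is commonly done with the linear (DLT) construction sketched below; the paper analyzes how the precision of such estimates degrades near the focus of expansion. This is a generic formulation under that assumption, not the authors' code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) two-frame triangulation.
    P1, P2: 3x4 camera projection matrices of the two frames.
    x1, x2: matched pixel coordinates (u, v) in each frame."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # the 3D point is the null vector of A (smallest singular vector)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # inhomogeneous 3D point
```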
Cited by: 0
On the Natural Statistics of Chromatic Images
Pub Date: 2018-04-08 | DOI: 10.1109/SSIAI.2018.8470308
Zeina Sinno, A. Bovik
The visual brain is optimally designed to process images from the natural environment that we perceive. Describing the natural environment statistically helps in understanding how the brain encodes those images efficiently. The Natural Scene Statistics (NSS) of the luminance component of images is the basis of several univariate and bivariate statistical models. The NSS of other color or chromatic components has been less well analyzed. In this paper, we study the univariate and bivariate NSS of luminance and other chromatic components and how they relate.
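Natural scene statistics are commonly computed from mean-subtracted, contrast-normalized (MSCN) coefficients of a channel; a minimal sketch is below. The window width and stabilizing constant are illustrative, and the exact statistical models fitted in the paper may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(channel, sigma=7 / 6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a single
    channel (luminance or a chroma component), a standard starting point for
    univariate NSS; bivariate statistics can be formed from pairs of
    neighboring coefficients or across channels."""
    channel = channel.astype(np.float64)
    mu = gaussian_filter(channel, sigma)                       # local mean
    var = gaussian_filter(channel ** 2, sigma) - mu ** 2       # local variance
    sigma_map = np.sqrt(np.abs(var))                           # local contrast
    return (channel - mu) / (sigma_map + c)
```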
Cited by: 4
Context-Sensitive Human Activity Classification in Collaborative Learning Environments
Pub Date: 2018-04-08 | DOI: 10.1109/SSIAI.2018.8470331
A. Jacoby, M. Pattichis, Sylvia Celedón-Pattichis, Carlos A. LópezLeiva
Human activity classification remains challenging due to the strong need to eliminate structural noise, the multitude of possible activities, and the strong variations in video acquisition. This paper studies human activity classification in a collaborative learning environment, using color-based object detection in conjunction with contextualization of object interaction to isolate motion vectors specific to each human activity. The basic approach is to use a separate classifier for each activity. Here, we consider the detection of typing, writing, and talking activities in raw videos. The method was tested on 43 uncropped video clips with 620 video frames for writing, 1050 for typing, and 1755 for talking. Using simple KNN classifiers, the method gave accuracies of 72.6% for writing, 71% for typing, and 84.6% for talking. Classification accuracy improved to 92.5% (writing), 82.5% (typing), and 99.7% (talking) with the use of deep neural networks.
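A minimal sketch of the per-activity classifier setup is shown below, assuming motion-vector features have already been extracted around the color-detected objects. The feature extraction and the deep-network variant are omitted, and the hyperparameters are illustrative:

```python
from sklearn.neighbors import KNeighborsClassifier

# One binary KNN classifier per activity (typing / writing / talking),
# trained on motion features isolated by the color-based object context.
ACTIVITIES = ["writing", "typing", "talking"]

def train_per_activity(features, labels, k=5):
    """features: dict activity -> (N, D) feature array;
    labels: dict activity -> (N,) array of 0/1 (activity absent/present)."""
    models = {}
    for act in ACTIVITIES:
        clf = KNeighborsClassifier(n_neighbors=k)
        clf.fit(features[act], labels[act])
        models[act] = clf
    return models

def classify_frame(models, frame_features):
    """Return the per-activity 0/1 decisions for one frame's feature vector."""
    return {act: int(m.predict(frame_features.reshape(1, -1))[0])
            for act, m in models.items()}
```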
Cited by: 9
High-homogeneity functional parcellation of human brain for investigating robust functional connectivity
Pub Date: 2018-04-08 | DOI: 10.1109/SSIAI.2018.8470321
Xiangyu Liu, Hua Xie, B. Nutter, S. Mitra
Over the years, resting state functional magnetic resonance imaging (rsfMRI) has been a preferred tool for analyzing human brain function and brain parcellation. Several different statistical methods have been proposed to study functional connectivity and to generate various parcellation atlases based on the corresponding connectivity maps. In this study, we employ a sliding-window correlation method to generate accurate individual voxel-wise dynamic functional connectivity maps, based on which the brain can be parcellated into highly homogeneous functional parcels. Because there is no ground truth for functional brain parcellation, we perform the parcellation via k-means clustering and compare it with other available parcellations. With the temporal characteristics of functional connectivity taken into consideration, high homogeneity can be observed in the high-resolution parcellation of the human brain.
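A simplified sketch of the sliding-window connectivity plus k-means pipeline is given below for a small set of voxel time series. The window length, step size, and number of parcels are illustrative, not the study's settings:

```python
import numpy as np
from sklearn.cluster import KMeans

def dynamic_fc_features(ts, win=40, step=10):
    """ts: (T, V) array, one column per voxel.
    For each sliding window, compute the V x V correlation matrix and stack
    its rows so every voxel gets a dynamic-connectivity feature vector."""
    T, _ = ts.shape
    feats = []
    for start in range(0, T - win + 1, step):
        feats.append(np.corrcoef(ts[start:start + win].T))   # (V, V) per window
    return np.concatenate(feats, axis=1)                      # (V, V * n_windows)

def parcellate(ts, n_parcels=10):
    """Cluster voxels into functionally homogeneous parcels with k-means."""
    X = dynamic_fc_features(ts)
    return KMeans(n_clusters=n_parcels, n_init=10, random_state=0).fit_predict(X)
```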
Cited by: 0
Performance of Supervised Classifiers for Damage Scoring of Zebrafish Neuromasts
Pub Date: 2018-04-08 | DOI: 10.1109/SSIAI.2018.8470377
Rohit C. Philip, Sree Ramya S. P. Malladi, M. Niihori, A. Jacob, Jeffrey J. Rodríguez
Supervised machine learning schemes are widely used to perform classification tasks. There is a wide variety of classifiers in use today, such as single- and multi-class support vector machines, k-nearest neighbors, decision trees, random forests, naive Bayes classifiers with or without kernel density estimation, linear discriminant analysis, quadratic discriminant analysis, and numerous neural network architectures. Our prior work used high-level shape, intensity, and texture features as predictors in a single-class support vector machine classifier to classify images of zebrafish neuromasts obtained using confocal microscopy into four discrete damage classes. Here, we analyze the performance of a multitude of supervised classifiers in terms of mean absolute error using these high-level features as predictors. In addition, we also analyze performance while using raw pixel data as predictors.
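A minimal sketch of such a comparison using scikit-learn is shown below; the classifier list and cross-validation setup are illustrative, and mean absolute error is computed on the ordinal damage scores:

```python
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def compare_classifiers(X, y):
    """X: feature matrix (high-level features or flattened pixels),
    y: integer damage scores (e.g. 0-3). MAE treats the labels as ordinal."""
    models = {
        "kNN": KNeighborsClassifier(),
        "decision tree": DecisionTreeClassifier(random_state=0),
        "random forest": RandomForestClassifier(random_state=0),
        "naive Bayes": GaussianNB(),
        "LDA": LinearDiscriminantAnalysis(),
        "SVM": SVC(),
    }
    # 5-fold cross-validated predictions, scored by mean absolute error
    return {name: mean_absolute_error(y, cross_val_predict(m, X, y, cv=5))
            for name, m in models.items()}
```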
Cited by: 0
Efficient GPU-based implementation of the median filter based on a multi-pixel-per-thread framework
Pub Date: 2018-04-08 | DOI: 10.1109/SSIAI.2018.8470318
Gabriel Salvador, Juan M. Chau, Jorge Quesada, Cesar Carranza
Median filtering has become a ubiquitous smoothing tool for image denoising tasks, with its complexity generally determined by the median algorithm used (usually on the order of O(n log n) when computing the median of n elements). Most algorithms were formulated for scalar single-processor computers, and few of them have been successfully adapted and implemented for computers with a parallel architecture. However, the redundancy in processing neighboring pixels has not yet been fully exploited in parallel implementations, and most implementations are only suitable for fixed-point images, not floating point. In this paper we propose an efficient parallel implementation of the 2D median filter, based on a multiple-pixel-per-thread framework, and test it on a CUDA-capable GPU for both fixed-point and floating-point data. Our computational results show that the proposed method outperforms state-of-the-art implementations, with the difference increasing significantly as the filter size grows.
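The multi-pixel-per-thread idea can be sketched with Numba's CUDA support (requires a CUDA-capable GPU). This is a simplified 3x3 illustration with illustrative launch parameters and pixels-per-thread count, not the authors' optimized implementation:

```python
import numpy as np
from numba import cuda, float32

PPT = 4  # pixels handled per thread along x (illustrative, not the paper's choice)

@cuda.jit
def median3x3(src, dst):
    # Each thread filters PPT horizontally adjacent pixels of a 2D float image;
    # border pixels are left untouched for brevity.
    gx, gy = cuda.grid(2)
    h = src.shape[0]
    w = src.shape[1]
    if gy < 1 or gy > h - 2:
        return
    win = cuda.local.array(9, float32)
    for k in range(PPT):
        x = gx * PPT + k
        if x < 1 or x > w - 2:
            continue
        i = 0
        for dy in range(-1, 2):
            for dx in range(-1, 2):
                win[i] = src[gy + dy, x + dx]
                i += 1
        # insertion sort of the 9 samples; the middle element is the median
        for a in range(1, 9):
            v = win[a]
            b = a - 1
            while b >= 0 and win[b] > v:
                win[b + 1] = win[b]
                b -= 1
            win[b + 1] = v
        dst[gy, x] = win[4]

img = np.random.rand(512, 512).astype(np.float32)
d_src = cuda.to_device(img)
d_dst = cuda.to_device(img.copy())               # copy so border pixels keep their values
tpb = (16, 16)                                   # threads per block (x, y)
bpg = ((512 // PPT) // tpb[0], 512 // tpb[1])    # grid size for a 512x512 image
median3x3[bpg, tpb](d_src, d_dst)
filtered = d_dst.copy_to_host()
```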
Cited by: 3
Viola-Jones Algorithm for Automatic Detection of Hyperbolic Regions in GPR Profiles of Bridge Decks
Pub Date: 2018-04-01 | DOI: 10.1109/SSIAI.2018.8470374
Mohammed Abdul Rahman, T. Zayed
Ground Penetrating Radar (GPR) is widely utilized as a non-destructive technique by transportation authorities for the inspection of bridge decks, owing to its ability to identify major subsurface defects in a short span of time. The attenuation of the recorded signal at rebar level forms a characteristic hyperbolic shape in the profiles obtained from GPR scans and corresponds to the corrosion state of the concrete. The detection of these hyperbolic regions is of paramount importance and is a precursor to successful interpretation of GPR data. This paper aims to automate the detection of hyperbolic regions, or hyperbolas, in GPR profiles using the Viola-Jones algorithm. A custom detector is obtained through training with numerous samples of hyperbolas over multiple stages. Detection is performed with the trained detector, which was applied over a complete bridge deck for validation. The eventual goal of such detection is to facilitate the automation of GPR data analysis.
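With OpenCV, applying such a trained cascade reduces to a few lines. The cascade file name and the detection parameters below are hypothetical placeholders (e.g. a cascade trained with opencv_traincascade on labeled hyperbola patches), not the authors' trained detector:

```python
import cv2

# Hypothetical path to a cascade trained on hyperbola patches from GPR B-scans.
cascade = cv2.CascadeClassifier("hyperbola_cascade.xml")

def detect_hyperbolas(bscan_gray):
    """Run the Viola-Jones detector over a grayscale GPR profile (B-scan) and
    return bounding boxes (x, y, w, h) of candidate hyperbolic regions."""
    return cascade.detectMultiScale(
        bscan_gray,
        scaleFactor=1.1,      # image pyramid step (illustrative value)
        minNeighbors=4,       # overlapping detections required to confirm a region
        minSize=(24, 24),     # smallest hyperbola apex region considered
    )
```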
Cited by: 5
A Reflectance Based Method For Shadow Detection and Removal
Pub Date: 2018-04-01 | DOI: 10.1109/SSIAI.2018.8470343
S. Yarlagadda, F. Zhu
Shadows are a common aspect of images, and when left undetected they can hinder scene understanding and visual processing. We propose a simple yet effective reflectance-based approach to detect shadows in a single image. The image is first segmented, and based on reflectance, illumination, and texture characteristics, segment pairs are identified as shadow and non-shadow pairs. The proposed method is tested on two publicly available and widely used datasets. Our method achieves higher accuracy in detecting shadows than previously reported methods despite requiring fewer parameters. We also show shadow-free results obtained by relighting the pixels in the detected shadow regions.
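A heavily simplified sketch of the segment-pairing idea is shown below: segments with similar chromaticity but strongly different intensity are paired, and the darker member is flagged as a shadow candidate. The texture cue and the thresholds used in the paper are not reproduced here, and the superpixel segmentation is only one possible choice:

```python
import numpy as np
from skimage.segmentation import slic

def shadow_candidate_pairs(img, n_segments=200, chroma_tol=0.05, dark_ratio=0.6):
    """img: (H, W, 3) uint8 RGB image. Returns the segment label map and a list
    of (i, j, darker_segment) tuples for candidate shadow / non-shadow pairs.
    Thresholds are illustrative values."""
    img = img.astype(np.float32) / 255.0
    labels = slic(img, n_segments=n_segments, start_label=0)
    n = labels.max() + 1
    # per-segment mean color, intensity, and (r, g) chromaticity
    means = np.array([img[labels == i].mean(axis=0) for i in range(n)])
    intensity = means.sum(axis=1) + 1e-6
    chroma = means[:, :2] / intensity[:, None]
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            same_reflectance = np.abs(chroma[i] - chroma[j]).max() < chroma_tol
            much_darker = (min(intensity[i], intensity[j]) /
                           max(intensity[i], intensity[j])) < dark_ratio
            if same_reflectance and much_darker:
                darker = i if intensity[i] < intensity[j] else j
                pairs.append((i, j, darker))
    return labels, pairs
```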
Cited by: 12