
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) — Latest Publications

In-between and cross-frequency dependence-based summarization of resting-state fMRI data
Pub Date: 2018-09-21 DOI: 10.1109/SSIAI.2018.8470314
Maziar Yaesoubi, Rogers F. Silva, V. Calhoun
Various data summarization approaches which consist of basis transformation and dimension reduction have been commonly used for information retrieval from brain imaging data including functional magnetic resonance imaging (fMRI). However, most approaches do not include frequency variation of the temporal data in the basis transformation. Here we propose a novel approach to incorporate in-between and cross-frequency dependence for summarization of resting-state fMRI data.
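The abstract gives no implementation details, so the following is only a generic illustration of what "cross-frequency dependence" can mean for a time series: correlating STFT power envelopes across frequency bins of a synthetic 1D signal. All function and variable names are hypothetical, not the authors'.

```python
import numpy as np

def cross_frequency_dependence(x, win=64, hop=32):
    """Correlation of STFT power envelopes across frequency bins.

    Returns an (F, F) matrix whose (i, j) entry is the correlation
    between the power time-courses of frequency bins i and j.
    """
    frames = np.lib.stride_tricks.sliding_window_view(x, win)[::hop]
    power = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)) ** 2
    return np.corrcoef(power.T)

t = np.arange(2048) / 256.0                     # 8 s at 256 Hz
# Two carriers (10 Hz and 40 Hz) sharing one slow amplitude envelope,
# so their frequency bins have strongly dependent power time-courses.
env = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)
x = env * (np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 40 * t))
D = cross_frequency_dependence(x)               # 33 x 33 dependence matrix
```

With a 64-sample window at 256 Hz, each bin spans 4 Hz; the shared envelope makes the bins near 10 Hz and the bin at 40 Hz strongly correlated.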
Citations: 0
Drive-Net: Convolutional Network for Driver Distraction Detection
Pub Date: 2018-09-21 DOI: 10.1109/SSIAI.2018.8470309
Mohammed S. Majdi, Sundaresh Ram, Jonathan T. Gill, Jeffrey J. Rodríguez
To help prevent motor vehicle accidents, there has been significant interest in finding an automated method to recognize signs of driver distraction, such as talking to passengers, fixing hair and makeup, eating and drinking, and using a mobile phone. In this paper, we present an automated supervised learning method called Drive-Net for driver distraction detection. Drive-Net uses a combination of a convolutional neural network (CNN) and a random decision forest for classifying images of a driver. We compare the performance of our proposed Drive-Net to two other popular machine-learning approaches: a recurrent neural network (RNN), and a multi-layer perceptron (MLP). We test the methods on a publicly available database of images acquired under a controlled environment containing about 22425 images manually annotated by an expert. Results show that Drive-Net achieves a detection accuracy of 95%, which is 2% more than the best results obtained on the same database using other methods.
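The paper's CNN architecture is not reproduced here; the sketch below shows only the two-stage idea — a feature extractor feeding a random decision forest — using scikit-learn, with synthetic vectors standing in for CNN activations. Class count, feature dimension, and noise level are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for CNN penultimate-layer activations: 500 "images" x 64
# features, 10 distraction classes with class-dependent means.
n, d, k = 500, 64, 10
y = rng.integers(0, k, size=n)
centers = rng.normal(size=(k, d))
X = centers[y] + 0.3 * rng.normal(size=(n, d))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Second stage: a random decision forest classifies the feature vectors.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_tr, y_tr)
acc = forest.score(X_te, y_te)
```

In the paper the forest operates on learned CNN features; here the well-separated synthetic clusters merely demonstrate the classification stage.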
Citations: 44
Graph Modularity and Randomness Measures: A Comparative Study
Pub Date: 2018-09-21 DOI: 10.1109/SSIAI.2018.8470322
V. Vergara, Qingbao Yu, V. Calhoun
The human brain connectome exhibits a specific structure that is understood not to be wired at random. However, aberrant connectivity has been detected and, moreover, linked to multiple neuropsychiatric and neurological diseases. Graph theory provides a set of methods to evaluate disruption of brain structural organization. An alternative approach evaluates the difference between brain connectivity matrices and random matrices, aiming to assess randomness. This work compares both approaches within the context of random connectivity. Results indicate that the correlation between the two assessments depends on the graph degree and can be as high as 0.3. Consequently, the two concepts can be treated as complementary, addressing different aspects of randomness.
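The abstract's comparison is not reproducible from the text alone; as a minimal sketch of the modularity side, the code below computes Newman modularity for a planted two-community graph and for an Erdős–Rényi graph of similar density (graph sizes and probabilities are arbitrary choices for illustration).

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a partition of an undirected graph."""
    k = A.sum(axis=1)                       # node degrees
    two_m = A.sum()                         # 2 * number of edges
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

rng = np.random.default_rng(1)
n = 60
labels = np.repeat([0, 1], n // 2)

# Planted partition: dense within communities, sparse between.
same = labels[:, None] == labels[None, :]
P = np.where(same, 0.5, 0.05)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T              # symmetric, no self-loops

# Random (Erdős–Rényi) graph of roughly the same density.
R = (rng.random((n, n)) < P.mean()).astype(float)
R = np.triu(R, 1); R = R + R.T

q_structured = modularity(A, labels)        # clearly positive
q_random = modularity(R, labels)            # near zero
```

The structured graph scores well above zero while the random graph's modularity hovers near zero, which is the contrast the paper relates to randomness measures.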
Citations: 2
A Ground-Truth Fusion Method for Image Segmentation Evaluation
Pub Date: 2018-09-21 DOI: 10.1109/SSIAI.2018.8470317
Sree Ramya S. P. Malladi, Sundaresh Ram, Jeffrey J. Rodríguez
Image segmentation evaluation is popularly categorized into two different approaches based on whether the evaluation uses a human expert’s manual segmentation as a reference or not. When comparing automated segmentation against manual segmentation, also referred to as the ground-truth segmentation, multiple ground-truths are usually available. Much research has been done on analysis of segmentation algorithms and performance metrics, but very little study has been done on analyzing techniques for ground-truth fusion from multiple ground-truth segmentations. We propose a hybrid ground-truth fusion technique for image segmentation evaluation and compare it with other existing ground-truth fusion methods on a data set having multiple ground-truths at various coarseness levels. Qualitative and quantitative results show that the proposed method provides improved segmentation evaluation performance.
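The paper's hybrid fusion technique is not specified in the abstract; as context, here is the simplest baseline it would be compared against — per-pixel majority voting over multiple expert masks (function name and tie rule are my own choices).

```python
import numpy as np

def majority_vote_fusion(masks):
    """Fuse binary ground-truth masks by per-pixel majority vote.

    masks: (n_experts, H, W) array of {0, 1} annotations.
    Ties (exactly half the experts voting 1) count as foreground.
    """
    masks = np.asarray(masks)
    votes = masks.sum(axis=0)
    return (votes * 2 >= masks.shape[0]).astype(np.uint8)

# Three experts disagree on the boundary of a square object.
a = np.zeros((8, 8), np.uint8); a[2:6, 2:6] = 1
b = np.zeros((8, 8), np.uint8); b[2:7, 2:7] = 1   # slightly larger
c = np.zeros((8, 8), np.uint8); c[3:6, 3:6] = 1   # slightly smaller
fused = majority_vote_fusion([a, b, c])
```

Here the fused mask equals the middle annotation: every pixel marked by at least two of the three experts survives.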
Citations: 2
SHAPE ADAPTIVE ACCELERATED PARAMETER OPTIMIZATION
Pub Date: 2018-04-08 DOI: 10.1109/SSIAI.2018.8470380
A. Yezzi, N. Dahiya
Computer-vision-based localization and pose estimation of known objects within camera images is often approached by optimizing a fitting cost with respect to a small number of parameters, including pose parameters as well as additional parameters that describe a limited set of variations of the object shape learned through training. Gradient-descent-based searches are typically employed, but the problem of how to "weigh" the gradient components arises and can often impact successful localization. This paper describes an automated, shape-adaptive way to choose the parameter weighting dynamically during the fitting process, applicable to both standard gradient descent and momentum-based accelerated gradient descent.
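The paper's shape-adaptive weighting rule is not given in the abstract; the toy sketch below only shows why per-parameter weighting matters, by running gradient descent on a badly scaled quadratic with and without curvature-balancing weights (the cost, weights, and step sizes are illustrative assumptions, not the authors' scheme).

```python
import numpy as np

# Badly scaled quadratic cost: translation vs. shape parameters live on
# very different scales, so one global step size fits neither of them.
H = np.diag([1.0, 100.0])                   # per-parameter curvatures
grad = lambda p: H @ p                      # gradient of 0.5 * p'Hp

def weighted_descent(p, weights, lr=0.9, steps=100):
    for _ in range(steps):
        p = p - lr * weights * grad(p)      # per-parameter weighting
    return p

p0 = np.array([10.0, 10.0])
# Weighting by inverse curvature equalizes the effective step sizes.
p_weighted = weighted_descent(p0, weights=1.0 / np.diag(H))
# A single scalar weight small enough for the stiff parameter crawls
# on the shallow one.
p_uniform = weighted_descent(p0, weights=np.full(2, 1.0 / 100.0))
```

The weighted run converges on both parameters; the uniformly weighted run is still far from the optimum on the shallow direction after the same number of steps.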
Citations: 0
Conjointly Space and 2D Frequency Localized Filterbanks
Pub Date: 2018-04-08 DOI: 10.1109/SSIAI.2018.8470386
P. Tay, Yanjun Yan
This paper proposes conjointly space-frequency well-localized separable 2D filters. The separable 2D filterbanks constitute a perfect or near-perfect reconstruction system. The space-frequency localization measure used to determine optimality is the product of a filter's 2D spatial variance and its 2D frequency variance. Particle swarm optimization is applied to efficiently determine optimal perfect or near-perfect reconstruction 2D filterbanks.
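The particle swarm search is omitted here, but the localization measure itself is concrete enough to sketch: the product of a filter's spatial energy variance and its 2D spectral energy variance (my own implementation of that description, with hypothetical names). A well-localized Gaussian scores far lower than a noise-like filter.

```python
import numpy as np

def localization_measure(h):
    """Product of a 2D filter's spatial variance and frequency variance.

    Each variance is the second moment of the normalized energy
    distribution about its centroid, in space and in the 2D DFT domain.
    """
    def variance2d(E):
        E = E / E.sum()
        ys, xs = np.indices(E.shape)
        cy, cx = (E * ys).sum(), (E * xs).sum()
        return (E * ((ys - cy) ** 2 + (xs - cx) ** 2)).sum()

    spatial = variance2d(h ** 2)
    spectral = variance2d(np.abs(np.fft.fftshift(np.fft.fft2(h))) ** 2)
    return spatial * spectral

n = 33
ys, xs = np.indices((n, n)) - n // 2
gauss = np.exp(-(xs ** 2 + ys ** 2) / (2 * 3.0 ** 2))   # well localized
noise = np.random.default_rng(0).random((n, n))         # poorly localized
```

An optimizer such as PSO would search filter coefficients (subject to the reconstruction constraint) to minimize this product.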
Citations: 0
A NOVEL SEMI-SUPERVISED DETECTION APPROACH WITH WEAK ANNOTATION
Pub Date: 2018-04-08 DOI: 10.1109/SSIAI.2018.8470307
Eric K. Tokuda, Gabriel B. A. Ferreira, Cláudio T. Silva, R. M. C. Junior
In this work we propose a semi-supervised learning approach for object detection where we use detections from a preexisting detector to train a new detector. We differ from previous works by coming up with a relative quality metric which involves simpler labeling and by proposing a full framework of automatic generation of improved detectors. To validate our method, we collected a comprehensive dataset of more than two thousand hours of streaming from public traffic cameras that contemplates variations in time, location and weather. We used these data to generate and assess with weak labeling a car detector that outperforms popular detectors on hard situations such as rainy weather and low resolution images. Experimental results are reported, thus corroborating the relevance of the proposed approach.
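The paper's full framework is not reproduced here; the sketch below illustrates only the core semi-supervised pattern the abstract describes — confident predictions from a preexisting ("teacher") model become pseudo-labels for a new ("student") model — using scikit-learn classifiers on synthetic 2D data (all data and thresholds are illustrative assumptions).

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Synthetic "detections": two classes, only 20 samples carry labels.
X, y = make_blobs(n_samples=600, centers=[[-4, 0], [4, 0]],
                  cluster_std=2.0, random_state=0)
X_lab, y_lab, X_unlab = X[:20], y[:20], X[20:]

# Preexisting detector trained on the small labeled pool.
teacher = LogisticRegression().fit(X_lab, y_lab)

# Keep only confident detections as pseudo-labels for the new detector.
proba = teacher.predict_proba(X_unlab)
keep = proba.max(axis=1) > 0.9
student = LogisticRegression().fit(
    np.vstack([X_lab, X_unlab[keep]]),
    np.concatenate([y_lab, proba[keep].argmax(axis=1)]),
)
acc = student.score(X, y)
```

The confidence threshold plays the role of the weak annotation filter: unreliable teacher outputs never reach the student's training set.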
Citations: 6
Artifact Detection Maps Learned using Shallow Convolutional Networks
Pub Date: 2018-04-08 DOI: 10.1109/SSIAI.2018.8470369
T. Goodall, A. Bovik
Automatically identifying the locations and severities of video artifacts is a difficult problem. We have developed a general method for detecting local artifacts by learning differences between distorted and pristine video frames. Our model, which we call the Video Impairment Mapper (VID-MAP), produces a full-resolution map of artifact detection probabilities based on comparisons of excitatory and inhibitory convolutional responses. Validation on a large database shows that our method outperforms the previous state of the art. A software release of VID-MAP that was trained to produce upscaling and combing detection probability maps is available online at http://live.ece.utexas.edu/research/quality/VIDMAP release.zip for public use and evaluation.
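VID-MAP's learned filters are not available from the abstract; purely as an illustration of the general pattern — rectified convolutional responses squashed into a per-pixel probability map — the sketch below uses a fixed Laplacian kernel and hand-picked sigmoid parameters (all of which are my assumptions, not the paper's model).

```python
import numpy as np
from scipy.ndimage import convolve

def artifact_probability_map(frame, kernel, gain=4.0, bias=-2.0):
    """Per-pixel artifact probabilities from rectified filter responses.

    The filter response is split into an excitatory (positive) and an
    inhibitory (negative) channel; their combined rectified energy is
    passed through a sigmoid to give a detection probability map.
    """
    r = convolve(frame.astype(float), kernel, mode="nearest")
    excitatory = np.maximum(r, 0.0)
    inhibitory = np.maximum(-r, 0.0)
    score = excitatory + inhibitory            # total rectified energy
    return 1.0 / (1.0 + np.exp(-(gain * score + bias)))

# A Laplacian responds strongly to 8x8 block edges (a typical blocky
# compression artifact) and weakly to smooth gradients.
lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))
blocky = np.kron(np.random.default_rng(0).random((4, 4)), np.ones((8, 8)))
p_smooth = artifact_probability_map(smooth, lap)
p_blocky = artifact_probability_map(blocky, lap)
```

In VID-MAP both the filters and the response combination are learned from distorted/pristine frame pairs rather than fixed by hand.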
Citations: 0
Natural Scene Statistics for Noise Estimation
Pub Date: 2018-04-08 DOI: 10.1109/SSIAI.2018.8470313
Praful Gupta, C. Bampis, Yize Jin, A. Bovik
We investigate the scale-invariant properties of divisively normalized bandpass responses of natural images in the DCT-filtered domain. We found that the variance of the normalized DCT filtered responses of a pristine natural image is scale invariant. This scale invariance property does not hold in the presence of noise and thus it can be used to devise an efficient blind image noise estimator. The proposed noise estimation approach outperforms other statistics-based methods especially for higher noise levels and competes well with patch-based and filter-based approaches. Moreover, the new variance estimation approach is also effective in the case of non-Gaussian noise. The research code of the proposed algorithm can be found at https://github.com/guptapraful/Noise Estimation.
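The paper's divisive-normalization statistic is not reproduced here; the sketch below demonstrates a related, classical natural-scene-statistics effect in the same block-DCT domain: bandpass (AC) coefficients of clean structured images are heavy-tailed, and additive Gaussian noise pulls their kurtosis down toward the Gaussian value — a deviation that can flag noise. The image model and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn
from scipy.stats import kurtosis

def block_dct_ac(img, block=8):
    """AC coefficients of non-overlapping block DCTs, flattened."""
    h, w = (d - d % block for d in img.shape)
    b = img[:h, :w].reshape(h // block, block, w // block, block)
    b = b.transpose(0, 2, 1, 3).reshape(-1, block, block)
    c = dctn(b, axes=(1, 2), norm="ortho").reshape(len(b), -1)
    return c[:, 1:].ravel()                    # drop each block's DC term

rng = np.random.default_rng(0)
# Piecewise-constant stand-in for a natural image: sparse edges give
# heavy-tailed (high-kurtosis) bandpass responses.
clean = np.kron(rng.random((20, 20)), np.ones((10, 10)))
noisy = clean + 0.2 * rng.normal(size=clean.shape)

k_clean = kurtosis(block_dct_ac(clean))        # large excess kurtosis
k_noisy = kurtosis(block_dct_ac(noisy))        # pulled toward Gaussian
```

Adding independent Gaussian noise provably shrinks excess kurtosis (the fourth cumulant is unchanged while the variance grows), which is what makes such statistics usable as blind noise indicators.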
Citations: 14
Robust Head Detection in Collaborative Learning Environments Using AM-FM Representations
Pub Date: 2018-04-08 DOI: 10.1109/SSIAI.2018.8470355
Wenjing Shi, M. Pattichis, Sylvia Celedón-Pattichis, Carlos A. LópezLeiva
The paper introduces the problem of robust head detection in collaborative learning environments. In such environments, the camera remains fixed while the students are allowed to sit at different parts of a table. Example challenges include the fact that students may be facing away from the camera or exposing different parts of their face to the camera. To address these issues, the paper proposes the development of two new methods based on Amplitude Modulation-Frequency Modulation (AM-FM) models. First, a combined approach based on color and FM texture is developed for robust face detection. Secondly, a combined approach based on processing the AM and FM components is developed for robust, back of the head detection. The results of the two approaches are also combined to detect all of the students sitting at each table. The robust face detector achieved 79% accuracy on a set of 1000 face image examples. The back of the head detector achieved 91% accuracy on a set of 363 test image examples.
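The paper's 2D AM-FM image model is not reproduced here; for intuition, the sketch below demodulates a 1D AM-FM signal with the analytic signal (Hilbert transform), recovering the amplitude envelope (AM) and instantaneous frequency (FM) — the two components the paper combines with color for face and back-of-head detection.

```python
import numpy as np
from scipy.signal import hilbert

# AM-FM demodulation in 1D: the analytic signal yields an amplitude
# envelope (AM component) and an instantaneous frequency (FM component).
fs = 1000.0
t = np.arange(1000) / fs
carrier_hz, mod_hz = 50.0, 2.0
x = (1.0 + 0.5 * np.sin(2 * np.pi * mod_hz * t)) \
    * np.cos(2 * np.pi * carrier_hz * t)

z = hilbert(x)                                  # analytic signal
am = np.abs(z)                                  # amplitude envelope
phase = np.unwrap(np.angle(z))
fm = np.diff(phase) / (2 * np.pi) * fs          # instantaneous freq, Hz
```

Away from the signal edges, `am` tracks the slow modulation and `fm` sits at the 50 Hz carrier; in 2D the same decomposition yields the AM (texture contrast) and FM (texture frequency) channels used by the detectors.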
Citations: 12