Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee
Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) whose performance exceeds that of current state-of-the-art SR methods. The significant performance improvement of our model comes from optimization by removing unnecessary modules from conventional residual networks. Performance is further improved by expanding the model size while stabilizing the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images at different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and proved their excellence by winning the NTIRE 2017 Super-Resolution Challenge [26].
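As a rough illustration of the kind of residual block the abstract describes — one with the normalization modules of conventional residual networks removed — the following PyTorch sketch shows the idea; the channel width and the residual scaling factor are assumptions for illustration, not values quoted above.

```python
# Minimal sketch of a batch-norm-free residual block in the spirit of EDSR.
# The 256-channel width and 0.1 residual scaling are assumptions for illustration.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels=256, res_scale=0.1):
        super().__init__()
        # Conv -> ReLU -> Conv, with no batch normalization layers.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        # Scale the residual branch before adding the identity path,
        # which helps stabilize training when the model is made wider/deeper.
        return x + self.body(x) * self.res_scale
```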
{"title":"Enhanced Deep Residual Networks for Single Image Super-Resolution","authors":"Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee","doi":"10.1109/CVPRW.2017.151","DOIUrl":"https://doi.org/10.1109/CVPRW.2017.151","url":null,"abstract":"Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge[26].","PeriodicalId":6668,"journal":{"name":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"2 1","pages":"1132-1140"},"PeriodicalIF":0.0,"publicationDate":"2017-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79424971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Bozkurt, Trevor Gale, Kivanç Köse, C. Alessi-Fox, D. Brooks, M. Rajadhyaksha, Jennifer G. Dy
Reflectance confocal microscopy (RCM) is an effective, non-invasive pre-screening tool for cancer diagnosis. However, acquiring and reading RCM images requires extensive training and experience, and novice clinicians exhibit high variance in diagnostic accuracy. Consequently, there is a compelling need for quantitative tools to standardize image acquisition and analysis. In this study, we use deep recurrent convolutional neural networks to delineate skin strata in stacks of RCM images collected at consecutive depths. To perform diagnostic analysis, clinicians collect RCM images at 4-5 specific layers in the tissue. Our model automates this process by discriminating between RCM images of different layers. Testing our model on an expert-labeled dataset of 504 RCM stacks, we achieve 87.97% classification accuracy and a 9-fold reduction in the number of anatomically impossible errors compared to the previous state of the art.
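A hedged sketch of the general recipe described above: a small CNN encodes each slice of an RCM depth stack, and a recurrent layer over the depth dimension assigns a skin-stratum label per slice. Layer sizes, the class count, and module names below are placeholders, not the authors' architecture.

```python
# Illustrative recurrent-convolutional classifier over an RCM depth stack.
# All hyperparameters here are assumptions for the sketch.
import torch
import torch.nn as nn

class StackClassifier(nn.Module):
    def __init__(self, n_classes=4, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(            # per-slice CNN feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.LSTM(32, hidden, batch_first=True)  # recurrence over depth
        self.head = nn.Linear(hidden, n_classes)          # per-slice stratum label

    def forward(self, stack):
        # stack: (B, D, 1, H, W) -- D consecutive depths per stack.
        B, D = stack.shape[:2]
        feats = self.encoder(stack.flatten(0, 1)).flatten(1)   # (B*D, 32)
        seq, _ = self.rnn(feats.view(B, D, -1))                # (B, D, hidden)
        return self.head(seq)                                  # (B, D, n_classes)
```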
{"title":"Delineation of Skin Strata in Reflectance Confocal Microscopy Images with Recurrent Convolutional Networks","authors":"A. Bozkurt, Trevor Gale, Kivanç Köse, C. Alessi-Fox, D. Brooks, M. Rajadhyaksha, Jennifer G. Dy","doi":"10.1109/CVPRW.2017.108","DOIUrl":"https://doi.org/10.1109/CVPRW.2017.108","url":null,"abstract":"Reflectance confocal microscopy (RCM) is an effective, non-invasive pre-screening tool for cancer diagnosis. However, acquiring and reading RCM images requires extensive training and experience, and novice clinicians exhibit high variance in diagnostic accuracy. Consequently, there is a compelling need for quantitative tools to standardize image acquisition and analysis. In this study, we use deep recurrent convolutional neural networks to delineate skin strata in stacks of RCM images collected at consecutive depths. To perform diagnostic analysis, clinicians collect RCM images at 4-5 specific layers in the tissue. Our model automates this process by discriminating between RCM images of different layers. Testing our model on an expert labeled dataset of 504 RCM stacks, we achieve 87.97% classification accuracy, and a 9-fold reduction in the number of anatomically impossible errors compared to the previous state-of-the-art.","PeriodicalId":6668,"journal":{"name":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"34 1","pages":"777-785"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75421725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breton L. Minnehan, A. Savakis
We propose a novel domain adaptation method for deep learning that combines adaptive batch normalization, to produce a common feature space between domains, with label transfer based on subspace alignment of deep features. The first step of our method automatically conditions the features from the source and target domains to have similar statistical distributions by normalizing the activations in each layer of our network using adaptive batch normalization. We then examine the clustering properties of the normalized features on a manifold to determine whether the target features are well suited for the second step of our algorithm, label transfer. This second step performs subspace alignment and k-means clustering on the feature manifold to transfer labels from the closest source cluster to each target cluster. The proposed manifold-guided label transfer method produces state-of-the-art results for deep adaptation on several standard digit recognition datasets.
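The label-transfer step can be illustrated roughly as follows: align the source feature subspace to the target subspace, cluster the target features with k-means, and give each target cluster the label of its nearest source cluster. This is a simplified sketch (class means stand in for source clusters), not the authors' exact procedure.

```python
# Simplified subspace alignment + k-means label transfer on deep features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def transfer_labels(src_feats, src_labels, tgt_feats, n_classes, dim=64):
    # Subspace alignment: PCA bases for each domain, then align source to target.
    Xs = PCA(n_components=dim).fit(src_feats).components_.T   # (D, dim)
    Xt = PCA(n_components=dim).fit(tgt_feats).components_.T   # (D, dim)
    M = Xs.T @ Xt                                              # alignment matrix
    src_proj = src_feats @ Xs @ M                              # aligned source features
    tgt_proj = tgt_feats @ Xt                                  # target features in target subspace

    # One centroid per source class (here: simple class means).
    src_centroids = np.stack(
        [src_proj[src_labels == c].mean(axis=0) for c in range(n_classes)]
    )

    # Cluster the target and label each cluster by its nearest source centroid.
    km = KMeans(n_clusters=n_classes, n_init=10).fit(tgt_proj)
    dists = np.linalg.norm(
        km.cluster_centers_[:, None, :] - src_centroids[None, :, :], axis=2
    )
    cluster_to_label = dists.argmin(axis=1)
    return cluster_to_label[km.labels_]
```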
{"title":"Manifold Guided Label Transfer for Deep Domain Adaptation","authors":"Breton L. Minnehan, A. Savakis","doi":"10.1109/CVPRW.2017.104","DOIUrl":"https://doi.org/10.1109/CVPRW.2017.104","url":null,"abstract":"We propose a novel domain adaptation method for deep learning that combines adaptive batch normalization to produce a common feature-space between domains and label transfer with subspace alignment on deep features. The first step of our method automatically conditions the features from the source/target domain to have similar statistical distributions by normalizing the activations in each layer of our network using adaptive batch normalization. We then examine the clustering properties of the normalized features on a manifold to determine if the target features are well suited for the second of our algorithm, label-transfer. The second step of our method performs subspace alignment and k-means clustering on the feature manifold to transfer labels from the closest source cluster to each target cluster. The proposed manifold guided label transfer methods produce state of the art results for deep adaptation on several standard digit recognition datasets.","PeriodicalId":6668,"journal":{"name":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"6 1","pages":"744-752"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78430167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shagan Sah, Thang Nguyen, Miguel Domínguez, F. Such, R. Ptucha
Recent advances in video understanding are enabling remarkable developments in video search, summarization, automatic captioning, and human-computer interaction. Attention mechanisms are a powerful way to steer focus onto different sections of a video. Existing mechanisms are driven by prior training probabilities and require input instances of identical temporal duration. We introduce an intuitive video understanding framework that combines continuous attention mechanisms over a family of Gaussian distributions with a hierarchical video representation. The hierarchical framework enables efficient, abstract temporal representations of video. Video attributes steer the attention mechanism intelligently, independent of video length. Our fully learnable end-to-end approach helps predict salient temporal regions of actions and objects in the video. We demonstrate state-of-the-art captioning results on the popular MSVD, MSR-VTT and M-VAD video datasets.
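A minimal sketch of continuous Gaussian temporal attention in this spirit: the attention weights over frame features come from a Gaussian whose center and width are predicted from the decoder state, so the weighting does not depend on the number of frames. A single Gaussian is used here for brevity where the abstract describes a family of Gaussians, and the parameterization is an assumption for illustration.

```python
# Continuous Gaussian attention over a sequence of frame features.
import torch
import torch.nn as nn

class GaussianTemporalAttention(nn.Module):
    def __init__(self, feat_dim, state_dim):
        super().__init__()
        # Predict a Gaussian center and width from the decoder state.
        self.param_net = nn.Linear(state_dim, 2)

    def forward(self, frame_feats, state):
        # frame_feats: (T, feat_dim), state: (state_dim,)
        T = frame_feats.shape[0]
        mu_raw, sigma_raw = self.param_net(state)
        mu = torch.sigmoid(mu_raw) * (T - 1)               # center in frame-index units
        sigma = nn.functional.softplus(sigma_raw) + 1e-3   # positive width

        t = torch.arange(T, dtype=frame_feats.dtype)
        weights = torch.exp(-0.5 * ((t - mu) / sigma) ** 2)
        weights = weights / weights.sum()                  # normalized, independent of T
        return (weights.unsqueeze(1) * frame_feats).sum(dim=0)  # attended context vector
```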
{"title":"Temporally Steered Gaussian Attention for Video Understanding","authors":"Shagan Sah, Thang Nguyen, Miguel Domínguez, F. Such, R. Ptucha","doi":"10.1109/CVPRW.2017.274","DOIUrl":"https://doi.org/10.1109/CVPRW.2017.274","url":null,"abstract":"Recent advances in video understanding are enabling incredible developments in video search, summarization, automatic captioning and human computer interaction. Attention mechanisms are a powerful way to steer focus onto different sections of the video. Existing mechanisms are driven by prior training probabilities and require input instances of identical temporal duration. We introduce an intuitive video understanding framework which combines continuous attention mechanisms over a family of Gaussian distributions with a hierarchical based video representation. The hierarchical framework enables efficient abstract temporal representations of video. Video attributes steer the attention mechanism intelligently independent of video length. Our fully learnable end-to-end approach helps predict salient temporal regions of action/objects in the video. We demonstrate state-of-the-art captioning results on the popular MSVD, MSR-VTT and M-VAD video datasets.","PeriodicalId":6668,"journal":{"name":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"64 1","pages":"2208-2216"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76098011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jun-Jie Huang, Tian-Rui Liu, P. Dragotti, T. Stathaki
Example-based single image super-resolution (SISR) methods use external training datasets and have recently attracted a lot of interest. Self-example-based SISR methods exploit redundant non-local self-similar patterns in natural images and are therefore better able to adapt to the image at hand to generate high-quality super-resolved images. In this paper, we propose to combine the advantages of example-based and self-example-based SISR. A novel hierarchical random-forest-based super-resolution (SRHRF) method is proposed to learn statistical priors from external training images. Each layer of random forests reduces the estimation error due to variance by aggregating prediction models from multiple decision trees. The hierarchical structure further boosts performance by pushing the estimation error due to bias towards zero. To further adaptively improve the super-resolved image, a self-example random forest (SERF) is learned from an image pyramid pair constructed from the down-sampled SRHRF result. Extensive numerical results show that the SRHRF method enhanced with SERF (SRHRF+) achieves state-of-the-art performance on natural images and yields substantially superior performance on images with rich self-similar patterns.
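An illustrative sketch (not the authors' implementation) of the hierarchical idea: each random-forest stage regresses the residual between the current estimate and the high-resolution target, and the next stage is trained on the corrected output of the previous one, so averaging trees within a stage attacks variance while stacking stages attacks bias. Patch representation and hyperparameters below are placeholders.

```python
# Stacked random-forest regression on flattened patches as a toy hierarchy.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_hierarchy(lr_patches, hr_patches, n_layers=3, n_trees=10):
    """lr_patches, hr_patches: (N, d) arrays of flattened, aligned patches."""
    estimate = lr_patches.copy()
    layers = []
    for _ in range(n_layers):
        residual = hr_patches - estimate
        forest = RandomForestRegressor(n_estimators=n_trees)
        forest.fit(estimate, residual)                   # tree averaging reduces variance
        estimate = estimate + forest.predict(estimate)   # stage-wise correction reduces bias
        layers.append(forest)
    return layers

def apply_hierarchy(layers, lr_patches):
    estimate = lr_patches.copy()
    for forest in layers:
        estimate = estimate + forest.predict(estimate)
    return estimate
```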
{"title":"SRHRF+: Self-Example Enhanced Single Image Super-Resolution Using Hierarchical Random Forests","authors":"Jun-Jie Huang, Tian-Rui Liu, P. Dragotti, T. Stathaki","doi":"10.1109/CVPRW.2017.144","DOIUrl":"https://doi.org/10.1109/CVPRW.2017.144","url":null,"abstract":"Example-based single image super-resolution (SISR) methods use external training datasets and have recently attracted a lot of interest. Self-example based SISR methods exploit redundant non-local self-similar patterns in natural images and because of that are more able to adapt to the image at hand to generate high quality super-resolved images. In this paper, we propose to combine the advantages of example-based SISR and self-example based SISR. A novel hierarchical random forests based super-resolution (SRHRF) method is proposed to learn statistical priors from external training images. Each layer of random forests reduce the estimation error due to variance by aggregating prediction models from multiple decision trees. The hierarchical structure further boosts the performance by pushing the estimation error due to bias towards zero. In order to further adaptively improve the super-resolved image, a self-example random forests (SERF) is learned from an image pyramid pair constructed from the down-sampled SRHRF generated result. Extensive numerical results show that the SRHRF method enhanced using SERF (SRHRF+) achieves the state-of-the-art performance on natural images and yields substantially superior performance for image with rich self-similar patterns.","PeriodicalId":6668,"journal":{"name":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"12 1","pages":"1067-1075"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75088054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yiwei Wang, Xin Ye, Yezhou Yang, Wenlong Zhang
We present a framework for vision-based hand movement prediction in a real-world human-robot collaboration scenario with safety guarantees. We first propose a perception submodule that takes in visual data alone and predicts the human collaborator's hand movement. A robot trajectory adaptive-planning submodule is then developed that takes the noisy movement prediction signal into consideration during optimization. We collect a new human manipulation dataset that supplements the previously available public dataset with motion capture data serving as ground truth for hand location. We then integrate the algorithm with a robot manipulator that can collaborate with human workers on a set of trained manipulation actions, and show that such a robot system outperforms one without movement prediction in terms of collision avoidance.
{"title":"Hand Movement Prediction Based Collision-Free Human-Robot Interaction","authors":"Yiwei Wang, Xin Ye, Yezhou Yang, Wenlong Zhang","doi":"10.1109/CVPRW.2017.72","DOIUrl":"https://doi.org/10.1109/CVPRW.2017.72","url":null,"abstract":"We present a framework from vision based hand movement prediction in a real-world human-robot collaborative scenario for safety guarantee. We first propose a perception submodule that takes in visual data solely and predicts human collaborator's hand movement. Then a robot trajectory adaptive planning submodule is developed that takes the noisy movement prediction signal into consideration for optimization. We first collect a new human manipulation dataset that can supplement the previous publicly available dataset with motion capture data to serve as the ground truth of hand location. We then integrate the algorithm with a robot manipulator that can collaborate with human workers on a set of trained manipulation actions, and it is shown that such a robot system outperforms the one without movement prediction in terms of collision avoidance.","PeriodicalId":6668,"journal":{"name":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"43 1","pages":"492-493"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72657074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Donné, Laurens Meeus, H. Luong, B. Goossens, W. Philips
Stationarity of reconstruction problems is the crux of enabling convolutional neural networks for many image processing tasks: the output estimate for a pixel generally does not depend on its location within the image but only on its immediate neighbourhood. We expect other invariances, too. For most pixel-processing tasks, rigid transformations should commute with the processing: a rigid transformation of the input should result in that same transformation of the output. The existing literature takes this into account only indirectly, by augmenting the training set: reflected and rotated versions of the inputs are also fed to the network when optimizing the network weights. In contrast, we enforce this invariance through the network design. Because of the encompassing nature of the proposed architecture, it can directly enhance existing CNN-based algorithms. We show how it can be applied to both SRCNN and FSRCNN, speeding up convergence in the initial training phase and improving performance both for pretrained weights and after fine-tuning.
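One simple way to build reflection/rotation invariance into the computation rather than the training set is to symmetrize over the eight rotations and reflections of the square: transform the input, apply the network, undo the transform on the output, and average. The sketch below illustrates that idea under those assumptions; it is not claimed to be the exact mechanism of the proposed architecture.

```python
# Symmetrize a pixel-to-pixel network over the dihedral group D4.
import torch

def d4_symmetrize(net, x):
    """x: (B, C, H, W) tensor; net: any pixel-to-pixel network."""
    outputs = []
    for k in range(4):                      # the four 90-degree rotations
        for flip in (False, True):          # with and without a horizontal flip
            y = torch.rot90(x, k, dims=(2, 3))
            if flip:
                y = torch.flip(y, dims=(3,))
            y = net(y)
            # Undo the transform in reverse order so outputs share one frame.
            if flip:
                y = torch.flip(y, dims=(3,))
            y = torch.rot90(y, -k, dims=(2, 3))
            outputs.append(y)
    return torch.stack(outputs).mean(dim=0)
```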
{"title":"Exploiting Reflectional and Rotational Invariance in Single Image Superresolution","authors":"S. Donné, Laurens Meeus, H. Luong, B. Goossens, W. Philips","doi":"10.1109/CVPRW.2017.141","DOIUrl":"https://doi.org/10.1109/CVPRW.2017.141","url":null,"abstract":"Stationarity of reconstruction problems is the crux to enabling convolutional neural networks for many image processing tasks: the output estimate for a pixel is generally not dependent on its location within the image but only on its immediate neighbourhood. We expect other invariances, too. For most pixel-processing tasks, rigid transformations should commute with the processing: a rigid transformation of the input should result in that same transformation of the output. In existing literature this is taken into account indirectly by augmenting the training set: reflected and rotated versions of the inputs are also fed to the network when optimizing the network weights. In contrast, we enforce this invariance through the network design. Because of the encompassing nature of the proposed architecture, it can directly enhance existing CNN-based algorithms. We show how it can be applied to SRCNN and FSRCNN both, speeding up convergence in the initial training phase, and improving performance both for pretrained weights and after finetuning.","PeriodicalId":6668,"journal":{"name":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"22 1","pages":"1043-1049"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75124230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
L. Bondi, S. Lameri, David Guera, Paolo Bestagini, E. Delp, S. Tubaro
Due to the rapid proliferation of image capture devices and user-friendly editing software, image manipulation is within everyone's reach. For this reason, the forensic community has developed a series of techniques to determine image authenticity. In this paper, we propose an algorithm for image tampering detection and localization that leverages the characteristic footprints left on images by different camera models. The rationale behind our algorithm is that all pixels of a pristine image should be detected as having been shot with a single device. Conversely, if a picture is obtained through image composition, traces of multiple devices can be detected. The proposed algorithm exploits a convolutional neural network (CNN) to extract characteristic camera-model features from image patches. These features are then analyzed by means of iterative clustering techniques in order to detect whether an image has been forged and to localize the alien region.
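A simplified sketch of the localization pipeline: a CNN provides a camera-model feature vector per patch, the patch features are clustered, and the minority cluster marks the candidate spliced region. Plain two-cluster k-means stands in here for the iterative clustering scheme used in the paper, and the feature extraction itself is assumed to happen upstream.

```python
# Cluster per-patch camera-model features to localize a candidate spliced region.
import numpy as np
from sklearn.cluster import KMeans

def localize_tampering(patch_features, patch_grid_shape):
    """patch_features: (N, D) CNN features for N patches laid out on a grid."""
    km = KMeans(n_clusters=2, n_init=10).fit(patch_features)
    labels = km.labels_
    # Assume the less populated cluster comes from a different camera model.
    alien = int((labels == 1).sum() < (labels == 0).sum())
    mask = (labels == alien).reshape(patch_grid_shape)
    # In practice a confidence measure (e.g., cluster separation) would decide
    # whether the image is pristine (single model) or a composition.
    return mask
```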
{"title":"Tampering Detection and Localization Through Clustering of Camera-Based CNN Features","authors":"L. Bondi, S. Lameri, David Guera, Paolo Bestagini, E. Delp, S. Tubaro","doi":"10.1109/CVPRW.2017.232","DOIUrl":"https://doi.org/10.1109/CVPRW.2017.232","url":null,"abstract":"Due to the rapid proliferation of image capturing devices and user-friendly editing software suites, image manipulation is at everyone's hand. For this reason, the forensic community has developed a series of techniques to determine image authenticity. In this paper, we propose an algorithm for image tampering detection and localization, leveraging characteristic footprints left on images by different camera models. The rationale behind our algorithm is that all pixels of pristine images should be detected as being shot with a single device. Conversely, if a picture is obtained through image composition, traces of multiple devices can be detected. The proposed algorithm exploits a convolutional neural network (CNN) to extract characteristic camera model features from image patches. These features are then analyzed by means of iterative clustering techniques in order to detect whether an image has been forged, and localize the alien region.","PeriodicalId":6668,"journal":{"name":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"3 1","pages":"1855-1864"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77348230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chen Wang, H. Bu, J. Bao, Chunming Li
Histopathology is the clinical gold standard for disease diagnosis, and the identification and segmentation of histological structures are prerequisites for diagnosis. With the advent of digital pathology, researchers' attention has turned to the analysis of digital pathology images. To relieve the workload on pathologists, a robust segmentation method is needed in the clinic for computer-assisted diagnosis. In this paper, we propose a level set framework for gland image segmentation. The input image is divided into two parts, containing glands with lumens and glands without lumens, respectively. Our experiments are performed on clinical datasets from West China Hospital, Sichuan University. The experimental results show that our method can handle glands without lumens and thus obtains better performance.
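As a rough, generic illustration of level-set segmentation on a gland image, the morphological Chan-Vese implementation in scikit-image can be used as below. This is a stand-in for the idea of evolving a level set toward region boundaries, not the specific formulation proposed here (which treats glands with and without lumens separately); iteration count and smoothing are arbitrary.

```python
# Generic level-set (morphological Chan-Vese) segmentation of a grayscale image.
from skimage import io, color
from skimage.segmentation import morphological_chan_vese

def segment_glands(image_path, n_iter=200):
    img = io.imread(image_path)
    gray = color.rgb2gray(img) if img.ndim == 3 else img.astype(float)
    # Evolve a level set from a checkerboard initialization; the zero level
    # set settles on boundaries between regions of homogeneous intensity.
    mask = morphological_chan_vese(gray, n_iter, init_level_set='checkerboard', smoothing=3)
    return mask.astype(bool)
```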
{"title":"A Level Set Method for Gland Segmentation","authors":"Chen Wang, H. Bu, J. Bao, Chunming Li","doi":"10.1109/CVPRW.2017.120","DOIUrl":"https://doi.org/10.1109/CVPRW.2017.120","url":null,"abstract":"Histopathology plays a role as the gold standard in clinic for disease diagnosis. The identification and segmentation of histological structures are the prerequisite to disease diagnosis. With the advent of digital pathology, researchers' attention is attracted by the analysis of digital pathology images. In order to relieve the workload on pathologists, a robust segmentation method is needed in clinic for computer-assisted diagnosis. In this paper, we propose a level set framework to achieve gland image segmentation. The input image is divided into two parts, which contain glands with lumens and glands without lumens, respectively. Our experiments are performed on the clinical datasets of West China Hospital, Sichuan University. The experimental results show that our method can deal with glands without lumens, thus can obtain a better performance.","PeriodicalId":6668,"journal":{"name":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"659 1","pages":"865-873"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76843794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. Piatkowska, J. Kogler, A. Belbachir, M. Gelautz
Event-based vision, as realized by bio-inspired Dynamic Vision Sensors (DVS), is gaining popularity due to its combination of high temporal resolution, wide dynamic range, and power efficiency. Potential applications include surveillance, robotics, and autonomous navigation under uncontrolled environmental conditions. In this paper, we address event-based vision for 3D reconstruction of dynamic scene content using two stationary DVS in a stereo configuration. We focus on a cooperative stereo approach and propose an improvement over a previously published algorithm that reduces the measured mean error by over 50 percent. An available ground-truth dataset for stereo event data is used to analyze the algorithm's sensitivity to parameter variation and to compare it with competing techniques.
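A toy sketch of the basic intuition behind event-based stereo with two stationary, rectified DVS: events on the same image row that occur nearly simultaneously with the same polarity are candidate correspondences, and their column offset gives a disparity. The cooperative, neighborhood-supported refinement that the paper improves upon is not reproduced here; the time window and disparity range are arbitrary.

```python
# Greedy temporal-coincidence matching of left/right event streams.
import numpy as np

def match_events(left, right, dt=1e-3, max_disp=60):
    """left/right: (N, 4) arrays of (t, x, y, polarity) events, sorted by t."""
    disparities = []
    j = 0
    for t, x, y, p in left:
        # Advance the right-event pointer to the temporal window around t.
        while j < len(right) and right[j][0] < t - dt:
            j += 1
        k = j
        while k < len(right) and right[k][0] <= t + dt:
            tr, xr, yr, pr = right[k]
            # Same row (epipolar line), same polarity, plausible disparity.
            if yr == y and pr == p and 0 <= x - xr <= max_disp:
                disparities.append((x, y, x - xr))
                break
            k += 1
    return np.array(disparities)   # (x, y, disparity) triples
```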
{"title":"Improved Cooperative Stereo Matching for Dynamic Vision Sensors with Ground Truth Evaluation","authors":"E. Piatkowska, J. Kogler, A. Belbachir, M. Gelautz","doi":"10.1109/CVPRW.2017.51","DOIUrl":"https://doi.org/10.1109/CVPRW.2017.51","url":null,"abstract":"Event-based vision, as realized by bio-inspired Dynamic Vision Sensors (DVS), is gaining more and more popularity due to its advantages of high temporal resolution, wide dynamic range and power efficiency at the same time. Potential applications include surveillance, robotics, and autonomous navigation under uncontrolled environment conditions. In this paper, we deal with event-based vision for 3D reconstruction of dynamic scene content by using two stationary DVS in a stereo configuration. We focus on a cooperative stereo approach and suggest an improvement over a previously published algorithm that reduces the measured mean error by over 50 percent. An available ground truth data set for stereo event data is utilized to analyze the algorithm's sensitivity to parameter variation and for comparison with competing techniques.","PeriodicalId":6668,"journal":{"name":"2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"05 1","pages":"370-377"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85850247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}