Pub Date: 2016-09-28 | DOI: 10.1109/ICIP.2016.7532353
Tsampikos Kounalakis, N. Boulgouris, G. Triantafyllidis
In this paper we introduce a novel representation for the classification of 3D images. Unlike most current approaches, our representation is not based on a fixed pyramid but adapts to image content and uses image regions instead of rectangular pyramid scales. Image characteristics, such as depth and color, are used for defining regions within images. Multiple region scales are formed in order to construct the proposed pyramid image representation. The proposed method achieves excellent results in comparison to conventional representations.
{"title":"Content-adaptive pyramid representation for 3D object classification","authors":"Tsampikos Kounalakis, N. Boulgouris, G. Triantafyllidis","doi":"10.1109/ICIP.2016.7532353","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532353","url":null,"abstract":"In this paper we introduce a novel representation for the classification of 3D images. Unlike most current approaches, our representation is not based on a fixed pyramid but adapts to image content and uses image regions instead of rectangular pyramid scales. Image characteristics, such as depth and color, are used for defining regions within images. Multiple region scales are formed in order to construct the proposed pyramid image representation. The proposed method achieves excellent results in comparison to conventional representations.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"10 1","pages":"231-235"},"PeriodicalIF":0.0,"publicationDate":"2016-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73120861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-26 | DOI: 10.1109/ICIP.2016.7532556
É. Puybareau, Hugues Talbot, É. Béquignon, Bruno Louis, G. Pelle, J. Papon, A. Coste, Laurent Najman
As image processing and analysis techniques improve, an increasing number of procedures in biomedical analysis can be automated. This brings many benefits, e.g., improved speed and accuracy, leading to more reliable diagnoses and follow-up and ultimately improving patient outcomes. Many automated procedures in biomedical imaging are well established and typically consist of detecting and counting various types of cells (e.g., blood cells or abnormal cells in Pap smears). In this article we propose to automate a different and difficult set of measurements, conducted on the cilia of people suffering from a variety of respiratory tract diseases. Cilia are slender, microscopic, hair-like structures or organelles that extend from the surface of nearly all mammalian cells. Motile cilia, such as those found in the lungs and respiratory tract, exhibit a periodic beating motion that keeps the airways clear of mucus and dirt. In this paper, we propose a fully automated method that computes various measurements of ciliary motion from high-speed video-microscopy recordings. The advantage of our approach is its capacity to automatically compute robust, adaptive and regionalized measurements, i.e., measurements associated with different regions of the image. We validate the robustness of our approach and illustrate its performance in comparison to the state of the art.
{"title":"Automating the measurement of physiological parameters: A case study in the image analysis of cilia motion","authors":"É. Puybareau, Hugues Talbot, É. Béquignon, Bruno Louis, G. Pelle, J. Papon, A. Coste, Laurent Najman","doi":"10.1109/ICIP.2016.7532556","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532556","url":null,"abstract":"As image processing and analysis techniques improve, an increasing number of procedures in bio-medical analyses can be automated. This brings many benefits, e.g improved speed and accuracy, leading to more reliable diagnoses and follow-up, ultimately improving patients outcome. Many automated procedures in bio-medical imaging are well established and typically consist of detecting and counting various types of cells (e.g. blood cells, abnormal cells in Pap smears, and so on). In this article we propose to automate a different and difficult set of measurements, which is conducted on the cilia of people suffering from a variety of respiratory tract diseases. Cilia are slender, microscopic, hair-like structures or organelles that extend from the surface of nearly all mammalian cells. Motile cilia, such as those found in the lungs and respiratory tract, present a periodic beating motion that keep the airways clear of mucus and dirt. In this paper, we propose a fully automated method that computes various measurements regarding the motion of cilia, taken with high-speed video-microscopy. The advantage of our approach is its capacity to automatically compute robust, adaptive and regionalized measurements, i.e. associated with different regions in the image. We validate the robustness of our approach, and illustrate its performance in comparison to the state-of-the-art.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"32 1","pages":"1240-1244"},"PeriodicalIF":0.0,"publicationDate":"2016-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84209801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-25 | DOI: 10.1109/ICIP.2016.7532733
Shuo Zheng, M. Antonini, Marco Cagnazzo, L. Guerrieri, M. Kieffer, I. Nemoianu, R. Samy, B. Zhang
This paper considers the Softcast joint source-channel video coding scheme for data transmission over parallel channels with different power constraints and noise characteristics, as is typical of DSL or PLT channels. To minimize the mean square error at the receiver, an optimal precoding matrix design problem has to be solved, which requires the solution of an inverse eigenvalue problem. This solution is taken from the MIMO channel precoder design literature. Alternative suboptimal precoding matrices are also proposed and analyzed, showing the efficiency of the optimal precoding matrix within Softcast, which provides gains that increase with the encoded video quality.
{"title":"Softcast with per-carrier power-constrained channels","authors":"Shuo Zheng, M. Antonini, Marco Cagnazzo, L. Guerrieri, M. Kieffer, I. Nemoianu, R. Samy, B. Zhang","doi":"10.1109/ICIP.2016.7532733","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532733","url":null,"abstract":"This paper considers the Softcast joint source-channel video coding scheme for data transmission over parallel channels with different power constraints and noise characteristics, typical in DSL or PLT channels. To minimize the mean square error at receiver, an optimal precoding matrix design problem has to be solved, which requires the solution of an inverse eigenvalue problem. Such solution is taken from the MIMO channel precoder design literature. Alternative suboptimal precoding matrices are also proposed and analyzed, showing the efficiency of the optimal precoding matrix within Softcast, which provides gains increasing with the encoded video quality.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"7 1","pages":"2122-2126"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75640393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-25 | DOI: 10.1109/ICIP.2016.7532858
N. Guettari, A. Capelle-Laizé, P. Carré
Blind steganalysis techniques are able to detect the presence of secret messages embedded in digital media files, such as images, video, and audio, when the steganography algorithm is unknown. This paper presents an image steganalysis method based on Evidential K-Nearest Neighbors (EV-KNN). The originality of this work is the use of the theoretical framework of belief functions on different subspaces of the feature vectors. The classifications obtained in the subspaces are combined using a specific combination function to provide the classification of a given image (cover or stego). The proposed approach is evaluated with the classical nsF5 steganographic method, which hides messages in JPEG images. Compared to the Ensemble Classifier steganalysis algorithm, the proposed approach significantly increases classification performance.
{"title":"Blind image steganalysis based on evidential K-Nearest Neighbors","authors":"N. Guettari, A. Capelle-Laizé, P. Carré","doi":"10.1109/ICIP.2016.7532858","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532858","url":null,"abstract":"Blind steganalysis techniques are able to detect the presence of secret messages embedded in digital media files, such as images, video, and audio, with an unknown steganography algorithm. This paper present an image steganalysis method based on Evidential K-Nearest Neighbors (EV-knn). Originality of this work is the use of theoretical framework of Belief functions on different subspaces of features vectors. Classifications obtained in subspaces are combined using specific combination function and to provide classification of a given image (cover or stego). The proposed approach is evaluated with the classical nsf5 steganographic method that hides messages in JPEG images. Compared to Ensemble Classifier steganalysis algorithm, the proposed approach significantly increases the performance of classification.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"47 1","pages":"2742-2746"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87620905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-25 | DOI: 10.1109/ICIP.2016.7533008
H. L. D. Micheaux, C. Ducottet, P. Frey
Multi-object tracking is a difficult problem underlying many computer vision applications. In this work, we focus on sediment transport experiments in a flow where the sediment is represented by calibrated spherical beads. The aim is to track all beads over long time sequences to obtain sediment velocities and concentration. Classical algorithms used in fluid mechanics fail to track the beads over long sequences with high precision because they incorrectly handle both missed detections and detector imprecision. Our contribution is a particle filter-based algorithm that includes an adapted multiple-motion model. Additionally, this algorithm integrates several improvements to account for the lack of precision of the detector. The evaluation was made using a test sequence with a dedicated ground truth. The results show that the method outperforms state-of-the-art competing algorithms.
{"title":"Online multi-model particle filter-based tracking to study bedload transport","authors":"H. L. D. Micheaux, C. Ducottet, P. Frey","doi":"10.1109/ICIP.2016.7533008","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7533008","url":null,"abstract":"Multi-object tracking is a difficult problem underlying many computer vision applications. In this work, we focus on sediment transport experiments in a flow were sediments are represented by spherical calibrated beads. The aim is to track all beads over long time sequences to obtain sediment velocities and concentration. Classical algorithms used in fluid mechanics fail to track the beads over long sequences with a high precision because they incorrectly handle both miss-detections and detector imprecision. Our contribution is to propose a particle filter-based algorithm including an adapted multiple motion model. Additionally, this algorithm integrates several improvements to account for the lack of precision of the detector. The evaluation was made using a test sequence with a dedicated ground-truth. The results show that the method outperforms state-of-the-art concurrent algorithms.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"103 1","pages":"3489-3493"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81808468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-25 | DOI: 10.1109/ICIP.2016.7532823
T. Chabardès, P. Dokládal, M. Faessel, M. Bilodeau
The watershed transform is a powerful tool for morphological segmentation. Most common implementations of this method impose a strict hierarchy on gray tones when processing the pixels of an image. These dependencies complicate the efficient use of modern computational architectures. This paper addresses that problem by introducing a new way of simulating the flooding that alleviates the sequential nature of hierarchical queue propagation. This method makes simultaneous and unordered growth possible: higher speeds are reached and larger data volumes can be processed. Experimental results show that the algorithm is accurate and produces a thin, well-centered watershed line.
{"title":"A parallel, O(N) algorithm for unbiased, thin watershed","authors":"T. Chabardès, P. Dokládal, M. Faessel, M. Bilodeau","doi":"10.1109/ICIP.2016.7532823","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532823","url":null,"abstract":"The watershed transform is a powerful tool for morphological segmentation. Most common implementations of this method involve a strict hierarchy on gray tones in processing the pixels composing an image. Those dependencies complexify the efficient use of modern computational architectures. This paper aims at answering this problem by introducing a new way of simulating the waterflood that alleviates the sequential nature of hierachical queue propagation. Simultaneous and disorderly growth is made possible using this method. higher speed is reached and bigger data volume can be processed. Experimental results show that the algorithm is accurate and produces a thin, well centered watershed line.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"357 1","pages":"2569-2573"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76400773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-25 | DOI: 10.1109/ICIP.2016.7532509
Anass Nouri, C. Charrier, O. Lézoray
We propose in this paper a novel perceptual, viewpoint-independent metric for the quality assessment of 3D meshes. This full-reference objective metric relies on the method proposed by Wang et al. [1], which compares the structural information of an original signal with that of a distorted one. To extract the structural information of a 3D mesh, we use a multi-scale visual saliency map on which we compute local statistics. The experimental results attest to the strong correlation between the objective scores provided by our metric and human judgments. Comparisons with the state of the art also show that our metric is very competitive.
{"title":"Full-reference saliency-based 3D mesh quality assessment index","authors":"Anass Nouri, C. Charrier, O. Lézoray","doi":"10.1109/ICIP.2016.7532509","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532509","url":null,"abstract":"We propose in this paper a novel perceptual viewpoint-independent metric for the quality assessment of 3D meshes. This full-reference objective metric relies on the method proposed by Wang et al. [1] that compares the structural informations between an original signal and a distorted one. In order to extract the structural informations of a 3D mesh, we use a multi-scale visual saliency map on which we compute the local statistics. The experimental results attest the strong correlation between the objective scores provided by our metric and the human judgments. Also, comparisons with the state-of-the-art prove that our metric is very competitive.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"29 1","pages":"1007-1011"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76879649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-25 | DOI: 10.1109/ICIP.2016.7533185
Xavier Bouyssounouse, A. Nefian, A. Thomas, L. Edwards, M. Deans, T. Fong
Planetary rovers navigate in extreme environments for which a Global Positioning System (GPS) is unavailable, maps are restricted to the relatively low resolution provided by orbital imagery, and compass information is often lacking due to weak or nonexistent magnetic fields. However, accurate rover localization is particularly important for mission success: reaching science targets, avoiding negative obstacles visible only in orbital maps, and maintaining good communication links with the ground. This paper describes a horizon-based solution for precise rover orientation estimation. The horizon detected in imagery from the on-board navigation cameras is matched against the horizon rendered from the existing terrain model. The set of rotation parameters (roll, pitch, yaw) that minimizes the cost function between the two horizon curves corresponds to the estimated rover pose.
{"title":"Horizon based orientation estimation for planetary surface navigation","authors":"Xavier Bouyssounouse, A. Nefian, A. Thomas, L. Edwards, M. Deans, T. Fong","doi":"10.1109/ICIP.2016.7533185","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7533185","url":null,"abstract":"Planetary rovers navigate in extreme environments for which a Global Positioning System (GPS) is unavailable, maps are restricted to relatively low resolution provided by orbital imagery, and compass information is often lacking due to weak or not existent magnetic fields. However, an accurate rover localization is particularly important to achieve the mission success by reaching the science targets, avoiding negative obstacles visible only in orbital maps, and maintaining good communication connections with ground. This paper describes a horizon solution for precise rover orientation estimation. The detected horizon in imagery provided by the on board navigation cameras is matched with the horizon rendered over the existing terrain model. The set of rotation parameters (roll, pitch yaw) that minimize the cost function between the two horizon curves corresponds to the rover estimated pose.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"38 1","pages":"4368-4372"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74033796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-25 | DOI: 10.1109/ICIP.2016.7532949
Sara Cadoni, É. Chouzenoux, J. Pesquet, C. Chaux
In the field of 3D image recovery, huge amounts of data need to be processed. Parallel optimization methods are therefore of major interest, since they make it possible to overcome memory limitations while benefiting from the intrinsic acceleration provided by recent multicore computing architectures. In this context, we propose a Block Parallel Majorize-Minimize Memory Gradient (BP3MG) algorithm for solving large-scale optimization problems. This algorithm combines a block coordinate strategy with an efficient parallel update. The proposed method is applied to a 3D microscopy image restoration problem involving a depth-variant blur, where it is shown to lead to significant computational time savings with respect to a sequential approach.
{"title":"A block parallel majorize-minimize memory gradient algorithm","authors":"Sara Cadoni, É. Chouzenoux, J. Pesquet, C. Chaux","doi":"10.1109/ICIP.2016.7532949","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532949","url":null,"abstract":"In the field of 3D image recovery, huge amounts of data need to be processed. Parallel optimization methods are then of main interest since they allow to overcome memory limitation issues, while benefiting from the intrinsic acceleration provided by recent multicore computing architectures. In this context, we propose a Block Parallel Majorize-Minimize Memory Gradient (BP3MG) algorithm for solving large scale optimization problems. This algorithm combines a block coordinate strategy with an efficient parallel update. The proposed method is applied to a 3D microscopy image restoration problem involving a depth-variant blur, where it is shown to lead to significant computational time savings with respect to a sequential approach.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"112 1","pages":"3194-3198"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79398971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-25 | DOI: 10.1109/ICIP.2016.7532753
Yang Xu, Zenbin Wu, Zhihui Wei, M. Mura, J. Chanussot, A. Bertozzi
Thanks to the fast development of sensors, it is now possible to acquire sequences of hyperspectral images. Such hyperspectral video sequences (HVS) are particularly suited to the detection and tracking of chemical gas plumes. In this paper, we present a novel gas plume detection method. It is based on the decomposition of the sequence into a low-rank term and a sparse term, corresponding to the background and the plume, respectively, while incorporating temporal consistency. To introduce spatial continuity, a post-processing step based on a Total Variation (TV) regularized model is added. Experimental results on real hyperspectral video sequences validate the effectiveness of the proposed method.
{"title":"GAS plume detection in hyperspectral video sequence using low rank representation","authors":"Yang Xu, Zenbin Wu, Zhihui Wei, M. Mura, J. Chanussot, A. Bertozzi","doi":"10.1109/ICIP.2016.7532753","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532753","url":null,"abstract":"Thanks to the fast development of sensors, it is now possible to acquire sequences of hyperspectral images. Those hyperspectral video sequences (HVS) are particularly suited for the detection and tracking of chemical gas plumes. In this paper, we present a novel gas plume detection method. It is based on the decomposition of the sequence into a low-rank and a sparse term, corresponding to the background and the plume, respectively, and incorporating temporal consistency. To introduce spatial continuity, a post processing is added using the Total Variation (TV) regularized model. Experimental results on real hyperspectral video sequences validate the effectiveness of the proposed method.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"70 1","pages":"2221-2225"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86878465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}