Intravascular Ultrasound image segmentation: A helical active contour method
M. Jourdain, J. Meunier, J. Sequeira, G. Cloutier, J. Tardif
Pub Date: 2010-07-07. DOI: 10.1109/IPTA.2010.5586803
During an Intravascular Ultrasound (IVUS) examination, a catheter with an ultrasound transducer is introduced into the body through a blood vessel and then pulled back to image a sequence of vessel cross-sections. An IVUS exam produces several hundred noisy images that are often hard to analyze; hence, powerful automatic analysis tools would facilitate the interpretation of structures in IVUS images. In this paper, we present a new IVUS segmentation method based on an original active contour model. The contour has a helical geometry and evolves as a spiral that is distorted until it reaches the artery lumen boundaries. Despite the use of a simple statistical model and a very sparse initialization of the snake, the algorithm converges to satisfactory solutions comparable to those of much more sophisticated segmentation methods. To validate the method, we compared our results to manually traced contours and obtained a Hausdorff distance below 0.61 mm (n = 540 images), indicating the robustness of the method.

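The validation metric above, the Hausdorff distance between an automatic and a manually traced contour, can be sketched in a few lines of NumPy. The contours below are illustrative stand-ins, not data from the paper.

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between 2-D point sets of shape (N, 2) and (M, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # all pairwise distances
    # Directed distances: worst-case nearest-neighbour distance in each direction.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two illustrative "lumen contours": a unit circle and a 5%-dilated copy.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
distorted = 1.05 * circle
print(hausdorff(circle, distorted))  # ≈ 0.05 (the radial gap)
```

Taking the maximum of both directed distances makes the measure symmetric, so neither contour can "hide" an outlier from the other.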
Implementation of a reticle seeker missile simulator for jamming effect analysis
Ga-Young Kim, Byoung-Ik Kim, Tae-Wuk Bae, Young-Choon Kim, Sang-Ho Ahn, K. Sohng
Pub Date: 2010-07-07. DOI: 10.1109/IPTA.2010.5586729
In this paper, we implement a reticle seeker missile simulator in MATLAB/Simulink to analyze the jamming effect on spin-scan and con-scan reticle seekers. The DIRCM (Directed Infrared Countermeasures) system emits pulsed flashes of infrared (IR) energy whose frequency and intensity influence the missile guidance system. Our simulation results show that the jamming effect is significant when the jammer frequency and the reticle frequency are similar, and we present a 3D trajectory of the missile motion under jamming.

Multi-level visual alphabets
Menno Israël, J. Schaar, E. V. D. Broek, M. D. Uyl, P. V. D. Putten
Pub Date: 2010-07-07. DOI: 10.1109/IPTA.2010.5586757
A central debate in visual perception theory is the argument for indirect versus direct perception; i.e., the use of intermediate, abstract, and hierarchical representations versus direct semantic interpretation of images through interaction with the outside world. We present a content-based representation that combines both approaches. The previously developed Visual Alphabet method is extended with a hierarchy of representations, each level feeding into the next, but based on features that are not abstract but directly relevant to the task at hand. Explorative benchmark experiments are carried out on face images to investigate and explain the impact of key parameters such as pattern size, number of prototypes, and the distance measures used. Results show that adding a middle layer improves performance by encoding the spatial co-occurrence of lower-level pattern prototypes.

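The middle layer described above encodes the spatial co-occurrence of lower-level prototype labels. A minimal sketch of that idea, with randomly assigned prototype labels standing in for the patterns an actual Visual Alphabet would produce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in: each image cell is already assigned to one of k
# pattern prototypes (here, random labels on an 8x8 grid).
k = 4
labels = rng.integers(0, k, size=(8, 8))

# Middle-layer feature: co-occurrence histogram of horizontally and vertically
# adjacent prototype labels, encoding their spatial arrangement.
cooc = np.zeros((k, k), dtype=int)
for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
    cooc[a, b] += 1  # horizontal neighbours
for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
    cooc[a, b] += 1  # vertical neighbours

print(cooc.sum())  # 8*7 horizontal + 7*8 vertical = 112 adjacent pairs
```

The flattened `cooc` matrix can then feed the next level of the hierarchy as a fixed-length feature vector.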
Stroke feature extraction for lettrine indexing
Giap Nguyen, Mickaël Coustaty, J. Ogier
Pub Date: 2010-07-07. DOI: 10.1109/IPTA.2010.5586747
Image features must commonly be extracted so that images can be understood and processed, and each process requires a suitable feature extraction method. In this paper, we describe a stroke feature extraction method to be used for lettrine indexing. A lettrine is a decorated letter that appears at the beginning of a chapter or paragraph in ancient books; it is principally composed of strokes. Our method is developed to characterize lettrines by these particular components so that they can be indexed by content; we thus use strokes instead of pixels as the elementary components. This study is innovative and the first results are promising. Indexing tests using this method will be carried out in the NaviDoMass project.

Spectrogram image encoding based on dynamic Hilbert curve routing
ChingShun Lin, Daren Wang
Pub Date: 2010-07-07. DOI: 10.1109/IPTA.2010.5586805
In this paper, we propose an image-based biological classification system that can identify different creatures by their sounds. The overall system involves the relative spectral transform-perceptual linear prediction (RASTA-PLP) for spectrogram image extraction, a cosine similarity measure for feature matching, a dynamic Hilbert curve for spectrogram routing, and a Gaussian mixture model for 1-D spectrogram classification. As an example of our approach, results for honk, dolphin, and whale classification are presented. The method works well on a wide variety of bio-sounds, especially highly self-repetitive ones. Applications of this approach include biological signal analysis and the establishment of spectrogram libraries.

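The dynamic Hilbert-curve routing is the paper's own contribution; the standard (static) Hilbert curve it builds on can be sketched as follows. `d2xy` is the classic bitwise index-to-coordinate conversion, and the 4x4 "spectrogram" is a toy array, not real audio data.

```python
import numpy as np

def d2xy(n, d):
    """Map index d along the Hilbert curve to (x, y) on an n x n grid (n a power of two)."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/flip the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Route a toy 4x4 "spectrogram" into a 1-D sequence along the curve.
spec = np.arange(16).reshape(4, 4)
path = [d2xy(4, d) for d in range(16)]
signal_1d = np.array([spec[y, x] for x, y in path])
print(signal_1d)
```

Consecutive curve indices always map to adjacent cells, which is why the 1-D sequence preserves the local time-frequency structure better than row-by-row scanning.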
Temporal transcoding of H.264/AVC video to the scalable format
H. Al-Muscati, F. Labeau
Pub Date: 2010-07-07. DOI: 10.1109/IPTA.2010.5586733
In this work, a novel video transcoder that converts a video sequence encoded with the H.264/AVC standard into a temporally scalable H.264/SVC stream is implemented using a pixel-domain heterogeneous architecture. The input H.264/AVC stream is fully decoded by the transcoder. Macroblock coding modes are extracted from the input stream and reused to encode the output stream. A set of new motion vectors is computed from the coded motion vectors of the input stream and mapped to either the hierarchical B-frame or the zero-delay referencing structure employed by H.264/SVC. These new motion vectors are then subjected to a 3-pixel refinement. As a result, a significant decrease in computational complexity is achieved while maintaining close-to-optimal compression efficiency.

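The motion vector refinement step can be sketched as a full search in a small window around the inherited vector, minimizing the sum of absolute differences (SAD). The frames, block size, and search radius below are illustrative assumptions, not the transcoder's actual configuration.

```python
import numpy as np

def refine_mv(ref, cur_block, top_left, mv, radius=3):
    """Refine an inherited motion vector by full search in a +/-radius window,
    minimizing the sum of absolute differences (SAD)."""
    h, w = cur_block.shape
    y0, x0 = top_left
    best, best_mv = None, mv
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + mv[0] + dy, x0 + mv[1] + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            sad = np.abs(ref[y:y + h, x:x + w].astype(int) - cur_block.astype(int)).sum()
            if best is None or sad < best:
                best, best_mv = sad, (mv[0] + dy, mv[1] + dx)
    return best_mv

# Toy setup: the current 4x4 block is the reference content shifted by (2, 1).
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
cur_block = ref[6:10, 5:9]   # block located at (4, 4) in the current frame
mv_coarse = (1, 0)           # inherited (coarse) motion vector
print(refine_mv(ref, cur_block, (4, 4), mv_coarse))  # → (2, 1)
```

Because the window is only (2*radius + 1)^2 candidates around an already-good vector, this costs far less than a fresh full-range motion search.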
Temporal error concealment algorithm for H.264/AVC using omnidirectional motion similarity
Changki Min, S. Jin, Hyeongchul Oh, Sang-Jun Park, Jechang Jeong
Pub Date: 2010-07-07. DOI: 10.1109/IPTA.2010.5586725
H.264/AVC is the newest of several video compression standards. Its main goals are efficient compression performance and network-friendly video coding. However, if an error occurs while transmitting compressed video, error concealment is needed to prevent error propagation and to improve video quality. In this paper, we propose a temporal error concealment algorithm that provides high performance for H.264/AVC. When an error occurs in an inter-coded frame, the proposed algorithm exploits the high similarity between the motion vectors (MVs) of the erroneous macroblock (MB) and those of its neighboring MBs to select a group of candidate MVs. Next, a weighted overlapped boundary matching algorithm that uses the credibility of the available information selects the best MV among the candidates. The experimental results show that the proposed algorithm improves PSNR by up to 3.02 dB compared with the boundary matching algorithm (BMA).

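For reference, the baseline boundary matching algorithm (BMA) that this paper improves on can be sketched as follows: each candidate MV is scored by how well the edges of its replacement block match the pixels bordering the lost macroblock. This is the plain unweighted variant, not the paper's weighted overlapped version, and the gradient frame and candidate list are toy assumptions.

```python
import numpy as np

def bma_select(ref, cur, mb_top_left, size, candidates):
    """Pick the candidate MV whose replacement block best matches the pixels
    bordering the lost macroblock (simple, unweighted boundary matching)."""
    y0, x0 = mb_top_left
    best, best_mv = None, None
    for dy, dx in candidates:
        y, x = y0 + dy, x0 + dx
        if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
            continue  # candidate block falls outside the reference frame
        block = ref[y:y + size, x:x + size].astype(int)
        cost = 0
        if y0 > 0:                      # top edge vs. the row above the lost MB
            cost += np.abs(block[0] - cur[y0 - 1, x0:x0 + size]).sum()
        if y0 + size < cur.shape[0]:    # bottom edge vs. the row below
            cost += np.abs(block[-1] - cur[y0 + size, x0:x0 + size]).sum()
        if x0 > 0:                      # left edge vs. the column to the left
            cost += np.abs(block[:, 0] - cur[y0:y0 + size, x0 - 1]).sum()
        if x0 + size < cur.shape[1]:    # right edge vs. the column to the right
            cost += np.abs(block[:, -1] - cur[y0:y0 + size, x0 + size]).sum()
        if best is None or cost < best:
            best, best_mv = cost, (dy, dx)
    return best_mv

# Toy frame: a smooth gradient, so the zero-motion candidate fits the boundary best.
ref = np.add.outer(10 * np.arange(12), np.arange(12))
cur = ref.copy()  # BMA only reads the boundary pixels of the current frame
mv = bma_select(ref, cur, (4, 4), 4, [(0, 0), (2, 0), (0, 2), (-2, 0)])
print(mv)  # → (0, 0)
```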
SPIDAR calibration using Support Vector Regression
Pierre Boudoin, H. Maaref, S. Otmane, M. Mallem
Pub Date: 2010-07-07. DOI: 10.1109/IPTA.2010.5586748
This paper presents a study of the SPIDAR, a tracking and haptic device, aimed at improving the accuracy of its reported position. First, we propose a new semi-automatic initialization technique for the device that uses an optical tracking system. Then, we use Support Vector Regression (SVR) to calibrate the SPIDAR and reduce its location errors. This calibration yields very good results, reducing the mean error by more than 50%.

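The SVR calibration step can be sketched with scikit-learn: learn a regression from reported to true positions and apply it as a correction. The synthetic distortion model below is an assumption for illustration; the authors calibrated against an optical tracking system.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Toy stand-in for the calibration task: the tracker reports positions with a
# smooth systematic distortion plus a little sensor noise.
rng = np.random.default_rng(0)
true_pos = rng.uniform(-1, 1, size=(300, 2))
reported = true_pos + 0.2 * np.sin(2 * true_pos)       # systematic distortion
reported += rng.normal(0, 0.005, size=reported.shape)  # small sensor noise

# One SVR per output coordinate, learning reported -> true.
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(reported, true_pos)

raw_err = np.linalg.norm(reported - true_pos, axis=1).mean()
cal_err = np.linalg.norm(model.predict(reported) - true_pos, axis=1).mean()
print(raw_err, cal_err)  # calibration should cut the mean error well below raw
```

On this synthetic data the learned correction removes most of the smooth distortion, mirroring the >50% error reduction reported above.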
Bayesian regularized nonnegative matrix factorization based face features learning
Xueyi Zhao
Pub Date: 2010-07-07. DOI: 10.1109/IPTA.2010.5586732
This paper proposes a novel technique for learning face features based on Bayesian regularized non-negative matrix factorization with the Itakura-Saito (IS) divergence (B-NMF). We show that the proposed technique not only explicitly incorporates a Bayesian regularizing prior on the learned features, but also possesses the scale-invariance property, which lets lower-energy components be treated with the same importance as high-energy components during learning. Real tests have been conducted, and the results obtained are very encouraging.

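Plain IS-NMF, which the proposed B-NMF extends with a Bayesian prior, can be sketched with the standard multiplicative updates for the Itakura-Saito divergence. The rank-2 test matrix is synthetic, and the prior itself is not implemented here.

```python
import numpy as np

def is_nmf(V, k, iters=200, seed=0):
    """NMF with Itakura-Saito divergence via the standard multiplicative updates
    (plain IS-NMF; the paper adds a Bayesian regularizing prior on top)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(0.5, 1.5, (m, k))
    H = rng.uniform(0.5, 1.5, (k, n))
    for _ in range(iters):
        WH = W @ H
        W *= ((V / WH**2) @ H.T) / ((1.0 / WH) @ H.T)
        WH = W @ H
        H *= (W.T @ (V / WH**2)) / (W.T @ (1.0 / WH))
    return W, H

def is_div(V, Vh):
    """Itakura-Saito divergence; note it depends only on the ratio V/Vh,
    which is the scale-invariance property mentioned in the abstract."""
    R = V / Vh
    return (R - np.log(R) - 1).sum()

# Exactly rank-2, strictly positive test matrix.
rng = np.random.default_rng(1)
V = rng.uniform(0.5, 1.5, (8, 2)) @ rng.uniform(0.5, 1.5, (2, 6))
W, H = is_nmf(V, 2)
print(is_div(V, W @ H))  # should be close to 0 for exactly low-rank data
```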
Comparison of feature selection schemes for color texture classification
A. Porebski, N. Vandenbroucke, L. Macaire
Pub Date: 2010-07-07. DOI: 10.1109/IPTA.2010.5586760
In this paper, we compare the performance of two sequential feature selection schemes used for supervised color texture classification. We focus on the sequential forward selection (SFS) scheme and the more complex sequential forward floating selection (SFFS) scheme, which avoids the "nesting effect". These schemes retain Haralick features extracted from the chromatic co-occurrence matrices of images coded in different color spaces. We experimentally study the contribution of the two feature selection schemes on three benchmark color texture databases.
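The simpler of the two schemes, sequential forward selection, can be sketched as a greedy wrapper around any classifier. The classifier and toy data below are illustrative assumptions; the paper selects Haralick features with its own classification setup. SFFS would add backward "floating" steps to escape the nesting effect.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def sfs(X, y, n_features, clf=None):
    """Sequential forward selection: greedily add the feature that most
    improves cross-validated accuracy."""
    clf = clf or KNeighborsClassifier(3)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_features:
        scores = [(cross_val_score(clf, X[:, selected + [j]], y, cv=3).mean(), j)
                  for j in remaining]
        _, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy data: features 0 and 1 carry the class signal, features 2-5 are pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.normal(0, 1, (200, 6))
X[:, 0] += 2 * y
X[:, 1] -= 2 * y
print(sfs(X, y, 2))  # expected to pick the two informative features
```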