Pub Date: 2012-06-27 | DOI: 10.1109/CBMI.2012.6269798
Music sparse decomposition onto a MIDI dictionary of musical words and its application to music mood classification
Boyang Gao, E. Dellandréa, Liming Chen
Most of the automated music analysis methods available in the literature rely on representing music through a set of low-level audio features related to temporal and frequential properties. Identifying high-level concepts, such as music mood, from this "black-box" representation is particularly challenging. We therefore present in this paper a novel music representation that allows gaining an in-depth understanding of the music structure. Its principle is to sparsely decompose the music over a basis of elementary audio elements, called musical words, which represent notes played by various instruments and generated through a MIDI synthesizer. From this representation, a music feature is also proposed to enable automatic music classification. Experiments conducted on two music datasets have shown the effectiveness of this approach in representing music signals accurately and enabling efficient classification for the complex problem of music mood classification.
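As an illustration of such a decomposition, here is a minimal matching-pursuit sketch, assuming the dictionary atoms are pre-synthesized, L2-normalized MIDI note waveforms of equal length; the paper's actual decomposition algorithm and dictionary layout are not specified in the abstract.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=50):
    """Greedy sparse decomposition of `signal` over `dictionary`.
    dictionary: one L2-normalized atom per row (here standing in for
    MIDI-synthesized note waveforms, all of the same length)."""
    residual = signal.astype(float).copy()
    indices, coeffs = [], []
    for _ in range(n_atoms):
        correlations = dictionary @ residual        # inner product with every atom
        best = int(np.argmax(np.abs(correlations)))
        c = correlations[best]
        residual -= c * dictionary[best]            # remove the atom's contribution
        indices.append(best)
        coeffs.append(c)
    return np.array(indices), np.array(coeffs), residual

# Toy usage: random atoms as stand-ins for synthesized musical words.
rng = np.random.default_rng(0)
atoms = rng.standard_normal((128, 2048))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
x = 0.8 * atoms[3] + 0.5 * atoms[40]
idx, c, _ = matching_pursuit(x, atoms, n_atoms=5)
print(idx[:2], np.round(c[:2], 2))   # should recover atoms 3 and 40 first
```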
{"title":"Music sparse decomposition onto a MIDI dictionary of musical words and its application to music mood classification","authors":"Boyang Gao, E. Dellandréa, Liming Chen","doi":"10.1109/CBMI.2012.6269798","DOIUrl":"https://doi.org/10.1109/CBMI.2012.6269798","url":null,"abstract":"Most of the automated music analysis methods available in the literature rely on the representation of the music through a set of low-level audio features related to temporal and frequential properties. Identifying high-level concepts, such as music mood, from this \"black-box\" representation is particularly challenging. Therefore we present in this paper a novel music representation that allows gaining an in-depth understanding of the music structure. Its principle is to decompose sparsely the music into a basis of elementary audio elements, called musical words, which represent the notes played by various instruments generated through a MIDI synthesizer. From this representation, a music feature is also proposed to allow automatic music classification. Experiments driven on two music datasets have shown the effectiveness of this approach to represent accurately music signals and to allow efficient classification for the complex problem of music mood classification.","PeriodicalId":120769,"journal":{"name":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125547286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-27 | DOI: 10.1109/CBMI.2012.6269853
Supervised acoustic topic model with a consequent classifier for unstructured audio classification
Samuel Kim, P. Georgiou, Shrikanth S. Narayanan
In the problem of classifying unstructured audio signals, we have reported promising results using acoustic topic models that assume an audio signal consists of latent acoustic topics [1, 2]. In this paper, we introduce a two-step method that performs supervised acoustic topic modeling on audio features, followed by a classification process. Experimental results in classifying audio signals with respect to onomatopoeias and semantic labels using the BBC Sound Effects library show that the proposed method can improve classification accuracy by 10-14% relative to the baseline supervised acoustic topic model. We also show that the proposed method generalizes across label sets, so that the topic models can be trained with one set of labels and used to classify another.
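A minimal sketch of the two-step pipeline, using scikit-learn's unsupervised LDA as a stand-in for the paper's supervised acoustic topic model and a linear SVM as the consequent classifier; the data, codebook size and class count below are toy assumptions.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

# Toy stand-in: each clip is a bag-of-acoustic-words histogram
# (counts of vector-quantized frame features); labels are semantic tags.
rng = np.random.default_rng(1)
X_counts = rng.poisson(2.0, size=(200, 500))   # 200 clips, 500-word codebook
y = rng.integers(0, 4, size=200)               # 4 hypothetical classes

# Step 1: topic modeling on audio words (unsupervised LDA here, as a
# stand-in for the paper's supervised acoustic topic model).
lda = LatentDirichletAllocation(n_components=16, random_state=0)
theta = lda.fit_transform(X_counts)            # per-clip topic proportions

# Step 2: a consequent classifier operating on the topic proportions.
clf = LinearSVC().fit(theta, y)
print("train accuracy:", clf.score(theta, y))
```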
{"title":"Supervised acoustic topic model with a consequent classifier for unstructured audio classification","authors":"Samuel Kim, P. Georgiou, Shrikanth S. Narayanan","doi":"10.1109/CBMI.2012.6269853","DOIUrl":"https://doi.org/10.1109/CBMI.2012.6269853","url":null,"abstract":"In the problem of classifying unstructured audio signals, we have reported promising results using acoustic topic models assuming that an audio signal consists of latent acoustic topics [1, 2]. In this paper, we introduce a two-step method that consists of performing supervised acoustic topic modeling on audio features followed by a classification process. Experimental results in classifying audio signals with respect to onomatopoeias and semantic labels using the BBC Sound Effects library show that the proposed method can improve the classification accuracy relatively 10~14% against the baseline supervised acoustic topic model. We also show that the proposed method is compatible with different labels so that the topic models can be trained with one set of labels and used to classify another set of labels.","PeriodicalId":120769,"journal":{"name":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"165 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129485165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-27 | DOI: 10.1109/CBMI.2012.6269814
Water flow detection from a wearable device with a new feature, the spectral cover
Patrice Guyot, J. Pinquier, R. André-Obrecht
This paper presents a new system for water flow detection in real-life recordings and its application to a medical context. The recognition system is based on an original feature for sound event detection in real-life conditions. This feature, called "spectral cover", shows an interesting behaviour for recognizing water flow in a noisy environment. The system is based solely on thresholds; it is simple, robust, and can be used on any corpus without training. An experiment was conducted on more than 7 hours of video recorded by a wearable device. Our system obtains good results for water flow recognition (F-measure of 66%). A comparison with classical approaches using MFCC or low-level descriptors with GMM classifiers attests to the good performance of our system. Adding the spectral cover to low-level descriptors also improves their performance and confirms that this feature is relevant.
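The abstract does not define the feature precisely; the sketch below assumes one plausible reading of "spectral cover" (the fraction of spectrum bins whose magnitude exceeds a share of the frame maximum) and illustrates the training-free, threshold-only detection style the paper describes.

```python
import numpy as np

def spectral_cover(frame, bin_thresh=0.1):
    """One plausible reading of the feature (an assumption, not the
    paper's exact definition): the fraction of spectrum bins whose
    magnitude exceeds a share of the frame maximum. Water flow,
    being broadband, should cover many bins."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return np.mean(spec > bin_thresh * spec.max())

def detect_water(signal, win=1024, hop=512, cover_thresh=0.5):
    """Purely threshold-based frame labeling: no training involved."""
    return np.array([spectral_cover(signal[i:i + win]) > cover_thresh
                     for i in range(0, len(signal) - win, hop)])

# Usage: white noise (broadband, water-like) vs. a pure tone.
fs = 16000
noise = np.random.default_rng(2).standard_normal(fs)
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
print(detect_water(noise).mean(), detect_water(tone).mean())  # high vs. low
```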
{"title":"Water flow detection from a wearable device with a new feature, the spectral cover","authors":"Patrice Guyot, J. Pinquier, R. André-Obrecht","doi":"10.1109/CBMI.2012.6269814","DOIUrl":"https://doi.org/10.1109/CBMI.2012.6269814","url":null,"abstract":"This paper presents a new system for water flow detection on real life recordings and its application to medical context. The recognition system is based on an original feature for sound event detection in real life. This feature, called ”spectral cover” shows an interesting behaviour to recognize water flow in a noisy environment. The system is only based on thresholds. It is simple, robust, and can be used on every corpus without training. An experiment is realized with more than 7 hours of videos recorded by a wearable device. Our system obtains good results for the water flow event recognition (F-measure of 66%). A comparison with classical approaches using MFCC or low levels descriptors with GMM classifiers is done to attest the good performance of our system. Adding the spectral cover to low levels descriptors also improve their performance and confirms that this feature is relevant.","PeriodicalId":120769,"journal":{"name":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"190 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114359675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-27 | DOI: 10.1109/CBMI.2012.6269800
Formula 1 onboard camera shot detector using motion activity areas
Arnau Raventos, F. Tarrés
Shot detection in sports video sequences has attracted great interest in recent years. In this paper, a new approach to detecting onboard camera shots in compressed Formula 1 video sequences is presented. To that end, after studying the characteristics of such shots, a technique based on a threshold comparison between a high-motion area and a stationary one has been devised. Efficient computation is achieved by decoding the motion vectors directly from the MPEG stream. The shot detection process is performed through a frame-by-frame hysteresis thresholding analysis. To enhance the results, an SVD shot boundary detector is applied. Promising results are presented that show the validity of the approach.
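A toy sketch of the threshold comparison between a high-motion area and a stationary one, with frame-by-frame hysteresis; the region layout and threshold values are illustrative assumptions, and the per-block motion magnitudes stand in for vectors decoded from the MPEG stream.

```python
import numpy as np

def onboard_shot_flags(mv_mags, t_high=3.0, t_low=1.5):
    """Hysteresis thresholding on a per-frame ratio between motion
    activity in an assumed high-motion area (lower half, where the
    track rushes by) and an assumed stationary area (upper strip,
    where the car body sits). Regions and thresholds are illustrative.

    mv_mags: list of 2-D arrays of per-block motion-vector magnitudes."""
    flags, active = [], False
    for mag in mv_mags:
        h = mag.shape[0]
        moving = mag[h // 2:].mean() + 1e-6
        still = mag[:h // 4].mean() + 1e-6
        ratio = moving / still
        # Hysteresis: enter on the high threshold, leave on the low one.
        active = ratio > t_low if active else ratio > t_high
        flags.append(active)
    return flags

# Toy usage: 5 calm frames, 5 onboard-like frames, 5 calm frames.
calm = [np.ones((16, 16)) for _ in range(5)]
onboard = [np.vstack([np.ones((8, 16)), 8 * np.ones((8, 16))]) for _ in range(5)]
print(onboard_shot_flags(calm + onboard + calm))
```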
{"title":"Formula 1 onboard camera shot detector using motion activity areas","authors":"Arnau Raventos, F. Tarrés","doi":"10.1109/CBMI.2012.6269800","DOIUrl":"https://doi.org/10.1109/CBMI.2012.6269800","url":null,"abstract":"Shot detection in sports video sequences has been of great interest in the last years. In this paper, a new approach to detect onboard camera shots in compressed Formula 1 video sequences is presented. To that end, and after studying the characteristics of the shot, a technique based in the thresholding comparison between a high motion area and a stationary one has been devised. Efficient computation is achieved by direct decoding of the motion vectors in the MPEG stream. The shot detection process is done through a frame-by-frame hysteresis thresholding analysis. In order to enhance the results, a SVD shot boundary detector is applied. Promising results are presented that show the validity of the approach.","PeriodicalId":120769,"journal":{"name":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122048760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-27 | DOI: 10.1109/CBMI.2012.6269796
An improved algorithm on Viola-Jones object detector
Qian Li, U. Niaz, B. Mérialdo
In image processing, the Viola-Jones object detector [1] is one of the most successful and widely used object detectors. A popular implementation used by the community is the one in OpenCV. The detector shows its strength in detecting faces, but we found it hard to extend to other kinds of objects: the convergence of its training phase depends heavily on the training data, and the prediction precision stays low. In this paper, we propose new ideas to improve its performance for diverse object categories. We incorporate six different types of feature images into the Viola-Jones framework. The integral image [1] used by the detector is then computed on each of these feature images instead of only on the gray image, and each stage classifier is trained on one of these feature images. We also present a new stopping criterion for stage training. In addition, we integrate a keypoint-based SVM [2] predictor into the prediction phase to improve the confidence of the detection result.
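For reference, a compact sketch of the integral-image machinery [1] evaluated over a bank of feature images rather than the gray image alone; the particular feature bank below is an assumption, since the paper's six types are not listed in the abstract.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = img[:y+1, :x+1].sum()."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via inclusion-exclusion."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def feature_images(gray):
    """A hypothetical feature bank: gray plus x/y gradient magnitudes
    and the overall gradient magnitude. Illustrative only."""
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    return [gray, gx, gy, np.hypot(gx, gy)]

gray = np.random.default_rng(3).random((24, 24))
# One integral image per feature image, so Haar-like rectangle sums can
# be evaluated on any feature image instead of on the gray image only.
iis = [integral_image(f) for f in feature_images(gray)]
print(np.isclose(rect_sum(iis[0], 0, 0, 24, 24), gray.sum()))  # True
```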
{"title":"An improved algorithm on Viola-Jones object detector","authors":"Qian Li, U. Niaz, B. Mérialdo","doi":"10.1109/CBMI.2012.6269796","DOIUrl":"https://doi.org/10.1109/CBMI.2012.6269796","url":null,"abstract":"In image processing, Viola-Jones object detector [1] is one of the most successful and widely used object detectors. A popular implementation used by the community is the one in OpenCV. The detector shows its strong power in detecting faces, but we found it hard to be extended to other kinds of objects. The convergence of the training phase of this algorithm depends a lot on the training data. And the prediction precision stays low. In this paper, we have come up with new ideas to improve its performance for diverse object categories. We incorporated six different types of feature images into the Viola and Jones' framework. The integral image [1] used by the Viola-Jones detector is then computed on these feature images respectively instead of only on the gray image. The stage classifier in Viola-Jones detector is now trained on one of these feature images. We also present a new stopping criterion for the stage training. In addition, we integrate a key points based SVM [2] predictor into the prediction phase to improve the confidence of the detection result.","PeriodicalId":120769,"journal":{"name":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"157 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125919843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-27 | DOI: 10.1109/CBMI.2012.6269839
Detecting and labeling folk literature in spoken cultural heritage archives using structural and prosodic features
F. Valente, P. Motlícek
Spoken cultural heritage can present considerably heterogeneous content such as tales, stories, recitals, poems, theatrical representations and other forms of folk literature. This work investigates the automatic detection and classification of these content types in large spoken audio archives. The corpus used for this study consists of 90 radio broadcast shows collected to preserve a large variety of Swiss French dialects. Given the variability of the language spoken in the recordings, the paper proposes a language-independent system based on structural features obtained using a speaker diarization system together with various acoustic/prosodic features. Results reveal that such a system can achieve an F-measure of 0.85 (precision 0.88 / recall 0.84) in retrieving folk literature from those archives; prosodic features appear more effective than, and complementary to, structural features. Furthermore, the paper investigates whether the same approach can be used to label speech segments into five broad classes (Storytelling, Poetry, Theatre, Interviews, Functionals), yielding F-measures ranging from 0.52 to 0.88. As a final contribution, prosodic features for disambiguating spoken prose from spoken poetry are investigated. In summary, the study shows that simple structural and acoustic/prosodic features can effectively retrieve and label folk literature in broadcast archives.
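A small sketch of what structural features derived from diarization output might look like; the exact feature set here is an illustrative assumption, not the paper's.

```python
import numpy as np

def structural_features(turns):
    """Structural features from speaker-diarization output.
    turns: list of (speaker_id, start_s, end_s) tuples."""
    durs = np.array([e - s for _, s, e in turns])
    total = durs.sum()
    speakers = {spk for spk, _, _ in turns}
    dominant = max(speakers,
                   key=lambda k: sum(e - s for spk, s, e in turns if spk == k))
    share = sum(e - s for spk, s, e in turns if spk == dominant) / total
    return {
        "n_speakers": len(speakers),
        "turns_per_min": 60 * len(turns) / total,
        "mean_turn_s": durs.mean(),
        "dominant_speaker_share": share,  # long monologues suggest storytelling
    }

# A storytelling-like show: one long narrator turn, a brief interjection.
turns = [("A", 0, 240), ("B", 240, 250), ("A", 250, 420)]
print(structural_features(turns))
```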
{"title":"Detecting and labeling folk literature in spoken cultural heritage archives using structural and prosodic features","authors":"F. Valente, P. Motlícek","doi":"10.1109/CBMI.2012.6269839","DOIUrl":"https://doi.org/10.1109/CBMI.2012.6269839","url":null,"abstract":"Spoken cultural heritage can present considerably heterogeneous content as tales, stories, recitals, poems, theatrical representations and other form of folk literature. This work investigates the automatic detection and classification of those data type in large spoken audio archives. The corpus used for this study consists of 90 radio broadcast shows collected for preserving a large variety of Swiss French dialects. Given the variability of the language spoken in the recordings, the paper proposes a language-independent system based on structural features obtained using a speaker diarization system and various acoustic/prosodic features. Results reveal that such a system can achieve an F-measure equal to 0.85 (Precision 0.88/Recall 0.84) in retrieving folk literature in those archives. Prosodic features appear more effective and complementary to structural features. Furthermore, the paper investigates whether the same approach can be used to label speech segments into five large classes (Storytelling, Poetry, Theatre, Interviews, Functionals) showing F-measures ranging from 0.52 to 0.88. As last contribution, prosodic features for disambiguating between spoken prose and spoken poetry are investigated. In summary the study shows that simple structural and acoustic/prosodic features can be used to effectively retrieve and label folk literature in broadcast archives.","PeriodicalId":120769,"journal":{"name":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129015691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-27 | DOI: 10.1109/CBMI.2012.6269795
Analyzing the behavior of professional video searchers using RAI query logs
Claudio Carpineto, Giovanni Romano, Andrea Bernardini
A large number of studies have investigated the query logs of Web search engines, but there is a lack of analogous studies for the multimedia database management systems (MDBMSs) used by professional searchers. In this paper we perform an extensive analysis of the query logs of the RAI multimedia catalogue, both at the query level and at the session level. Based on the observation that a large proportion of queries returned zero or, conversely, too many hits, we identified three query reformulation strategies used to reduce or enlarge the set of results. Our study indicates that the desire to control the amount of output may have a relatively limited (moderate-to-little) impact on user behavior, while some counter-intuitive findings suggest a suboptimal utilization of the system. The findings are useful for MDBMS developers and for trainers of professional searchers seeking to improve the performance of interactive searches, and for researchers conducting further work.
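A toy sketch of the kind of log analysis involved: bucketing queries by result-set size and labeling consecutive same-session reformulations as narrowing or broadening. The 200-hit cutoff and the sample log entries are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    session: str
    query: str
    hits: int

def classify(hits, too_many=200):
    """Bucket a query by its result-set size (cutoff is illustrative)."""
    return "zero" if hits == 0 else "too_many" if hits > too_many else "ok"

def reformulations(log):
    """Label consecutive same-session queries as narrowing or broadening,
    a simple proxy for detecting reformulation strategies."""
    out = []
    for prev, cur in zip(log, log[1:]):
        if prev.session != cur.session:
            continue
        move = "narrow" if cur.hits < prev.hits else "broaden"
        out.append((prev.query, cur.query, move))
    return out

log = [LogEntry("s1", "fellini", 5400), LogEntry("s1", "fellini interview 1963", 12),
       LogEntry("s2", "regatta storica", 0), LogEntry("s2", "regatta", 87)]
print([classify(e.hits) for e in log])
print(reformulations(log))
```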
{"title":"Analyzing the behavior of professional video searchers using RAI query logs","authors":"Claudio Carpineto, Giovanni Romano, Andrea Bernardini","doi":"10.1109/CBMI.2012.6269795","DOIUrl":"https://doi.org/10.1109/CBMI.2012.6269795","url":null,"abstract":"A large number of studies have investigated the query logs of Web search engines, but there is a lack of analogous studies for multimedia database management systems (MDBMSs) used by professional searchers. In this paper we perform an extensive analysis of the query logs of the RAI multimedia catalogue, both at the query level and at the session level. Based on the observation that a large proportion of the queries returned zero or, conversely, too many hits, we identified three query reformulation strategies to reduce or enlarge the set of results. Our study indicates that the desire of controlling the amount of output may have a relatively limited (moderate-to-little) impact on the user's behavior, while at the same time some counter-intuitive findings suggest a suboptimal utilization of the system. The findings are useful for MDBMS developers and for trainers of professional searchers to improve the performance of interactive searches, and for researchers to conduct further work.","PeriodicalId":120769,"journal":{"name":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116000742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-27 | DOI: 10.1109/CBMI.2012.6269837
Two-layers re-ranking approach based on contextual information for visual concepts detection in videos
Abdelkader Hamadi, G. Quénot, P. Mulhem
Context helps in understanding the meaning of a word and allows the disambiguation of polysemous terms, and much research has taken advantage of this notion in information retrieval. For concept-based video indexing and retrieval, this idea seems a priori valid. One of the major problems is then to provide a definition of the context and to choose the most appropriate methods for using it. Two kinds of context have been exploited in the past to improve concept detection: some works use inter-concept relations as a semantic context, while other approaches use the temporal features of videos. Results of these works showed that both the "temporal" and the "semantic" contexts can improve concept detection. In this work we use the semantic context through an ontology and exploit the efficiency of the temporal context in a "two-layer" re-ranking approach. Experiments conducted on TRECVID 2010 data show that the proposed approach always improves over the initial results obtained using either MSVM or KNN classifiers or their late fusion, achieving relative gains of between 9% and 33% in the MAP measure.
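A minimal sketch of two-layer re-ranking of per-shot concept scores, with temporal smoothing as one layer and an ontology-driven semantic boost as the other; the weighting scheme is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def temporal_rerank(scores, window=2):
    """Layer 1: smooth each concept's score over neighbouring shots
    (temporal context). scores: shots x concepts."""
    out = np.empty_like(scores)
    n = len(scores)
    for t in range(n):
        lo, hi = max(0, t - window), min(n, t + window + 1)
        out[t] = 0.5 * scores[t] + 0.5 * scores[lo:hi].mean(axis=0)
    return out

def semantic_rerank(scores, relations, alpha=0.3):
    """Layer 2: boost a concept when ontology-related concepts score
    high in the same shot. relations: concepts x concepts, nonzero
    where the ontology links two concepts. Weights are illustrative."""
    norm = np.maximum(relations.sum(1, keepdims=True), 1)
    related = scores @ (relations / norm)
    return (1 - alpha) * scores + alpha * related

# Toy: 6 shots, 3 concepts; concepts 0 and 1 linked in the ontology.
scores = np.random.default_rng(4).random((6, 3))
relations = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
print(semantic_rerank(temporal_rerank(scores), relations).round(2))
```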
{"title":"Two-layers re-ranking approach based on contextual information for visual concepts detection in videos","authors":"Abdelkader Hamadi, G. Quénot, P. Mulhem","doi":"10.1109/CBMI.2012.6269837","DOIUrl":"https://doi.org/10.1109/CBMI.2012.6269837","url":null,"abstract":"Context helps to understand the meaning of a word and allows the disambiguation of polysemic terms. Many researches took advantage of this notion in information retrieval. For concept-based video indexing and retrieval, this idea seems a priori valid. One of the major problems is then to provide a definition of the context and to choose the most appropriate methods for using it. Two kinds of contexts were exploited in the past to improve concepts detection: in some works, inter-concepts relations are used as semantic context, where other approaches use the temporal features of videos to improve concepts detection. Results of these works showed that the “temporal” and the “semantic” contexts can improve concept detection. In this work we use the semantic context through an ontology and exploit the efficiency of the temporal context in a “two-layers” re-ranking approach. Experiments conducted on TRECVID 2010 data show that the proposed approach always improves over initial results obtained using either MSVM or KNN classifiers or their late fusion, achieving relative gains between 9% and 33% of the MAP measure.","PeriodicalId":120769,"journal":{"name":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131999537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-27 | DOI: 10.1109/CBMI.2012.6269840
Comprehensive wavelet-based image characterization for Content-Based Image Retrieval
G. Quellec, M. Lamard, B. Cochener, C. Roux, G. Cazuguel
A novel image characterization based on the wavelet transform is presented in this paper. Previous work on wavelet-based image characterization has focused on adapting a wavelet basis to an image or an image dataset. We propose in this paper to go one step further: images are characterized with all possible wavelet bases of a given support. A simple image signature based on the standardized moments of the wavelet coefficient distributions is proposed. This signature can be computed quickly for each possible wavelet filter, yielding an image signature map. We propose to use this signature map as an image characterization for Content-Based Image Retrieval (CBIR). High retrieval performance was achieved on a medical, a face detection and a texture dataset: a precision at five of 62.5%, 97.8% and 64.0% was obtained for these datasets, respectively.
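A sketch of the signature map, using PyWavelets' named discrete wavelets as a stand-in for enumerating all filters of a given support, and variance/skewness/kurtosis as the moments (a plausible choice; the paper's exact moments are not given in the abstract).

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.stats import skew, kurtosis

def wavelet_signature(img, wavelet, level=2):
    """Moments of the wavelet-coefficient distributions for one wavelet
    basis: per detail subband, record spread, skewness and kurtosis."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sig = []
    for band in coeffs[1:]:                  # detail subbands only
        for c in band:
            v = c.ravel()
            sig += [v.std(), skew(v), kurtosis(v)]
    return np.array(sig)

def signature_map(img, max_len=8):
    """One signature per candidate filter up to a support bound,
    forming the image's signature map."""
    sigs = {}
    for name in pywt.wavelist(kind="discrete"):
        w = pywt.Wavelet(name)
        if w.dec_len <= max_len:
            sigs[name] = wavelet_signature(img, w)
    return sigs

img = np.random.default_rng(5).random((64, 64))
m = signature_map(img)
print(len(m), "wavelets in the map; e.g.", next(iter(m)))
```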
{"title":"Comprehensive wavelet-based image characterization for Content-Based Image Retrieval","authors":"G. Quellec, M. Lamard, B. Cochener, C. Roux, G. Cazuguel","doi":"10.1109/CBMI.2012.6269840","DOIUrl":"https://doi.org/10.1109/CBMI.2012.6269840","url":null,"abstract":"A novel image characterization based on the wavelet transform is presented in this paper. Previous works on wavelet-based image characterization have focused on adapting a wavelet basis to an image or an image dataset. We propose in this paper to take one step further: images are characterized with all possible wavelet bases, with a given support. A simple image signature based on the standardized moments of the wavelet coefficient distributions is proposed. This signature can be computed for each possible wavelet filter fast. An image signature map is thus obtained. We propose to use this signature map as an image characterization for Content-Based Image Retrieval (CBIR). High retrieval performance was achieved on a medical, a face detection and a texture dataset: a precision at five of 62.5%, 97.8% and 64.0% was obtained for these datasets, respectively.","PeriodicalId":120769,"journal":{"name":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122943560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-27 | DOI: 10.1109/CBMI.2012.6269844
A generative model for concurrent image retrieval and ROI segmentation
I. González-Díaz, Carlos E. Baz-Hormigos, Moises Berdonces, F. Díaz-de-María
This paper proposes a probabilistic generative model that concurrently tackles the problems of image retrieval and detection of the region of interest (ROI). By introducing a latent variable that classifies matches as true or false, we specifically focus on applying geometric constraints to the keypoint matching process and on achieving robust estimates of the geometric transformation between two images showing the same object. Our experiments on a challenging image retrieval database demonstrate that our approach outperforms the most prevalent approach to geometrically constrained matching and compares favorably to other state-of-the-art methods. Furthermore, the proposed technique concurrently provides very good segmentations of the region of interest.
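A sketch of the generative idea: a latent inlier posterior per match, updated in an EM-style loop that alternately refits a 2-D similarity transform on soft weights and rescores the matches. This illustrates the mechanism under simplified assumptions, not the paper's exact model.

```python
import numpy as np

def fit_similarity(P, Q, w):
    """Weighted least-squares 2-D similarity transform Q ~ s*R*P + t."""
    w = w / w.sum()
    mp, mq = w @ P, w @ Q
    Pc, Qc = P - mp, Q - mq
    C = (Pc * w[:, None]).T @ Qc                 # weighted cross-covariance
    U, S, Vt = np.linalg.svd(C)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    s = (S * [1.0, d]).sum() / (w @ (Pc ** 2).sum(1))
    return s, R, mq - s * R @ mp

def em_matching(P, Q, iters=15, sigma=5.0, outlier_density=1e-4, prior=0.5):
    """EM-style loop: the latent variable is each match's inlier
    posterior; the M-step refits the transform on soft inlier weights."""
    w = np.full(len(P), prior)
    for _ in range(iters):
        s, R, t = fit_similarity(P, Q, w)                     # M-step
        r2 = ((Q - (s * P @ R.T + t)) ** 2).sum(1)
        lik_in = np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
        w = prior * lik_in / (prior * lik_in + (1 - prior) * outlier_density)
    return s, R, t, w                                         # w ~ P(inlier)

# Toy data: 30 true matches under a known transform + 10 random outliers.
rng = np.random.default_rng(6)
P = rng.uniform(0, 100, (40, 2))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
Q = 1.2 * P @ R_true.T + np.array([5.0, -3.0]) + rng.normal(0, 0.5, (40, 2))
Q[30:] = rng.uniform(0, 100, (10, 2))                         # outliers
s, R, t, w = em_matching(P, Q)
print(round(s, 2), (w > 0.5).sum(), "matches kept")           # ~1.2, ~30
```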
{"title":"A generative model for concurrent image retrieval and ROI segmentation","authors":"I. González-Díaz, Carlos E. Baz-Hormigos, Moises Berdonces, F. Díaz-de-María","doi":"10.1109/CBMI.2012.6269844","DOIUrl":"https://doi.org/10.1109/CBMI.2012.6269844","url":null,"abstract":"This paper proposes a probabilistic generative model that concurrently tackles the problems of image retrieval and detection of the region-of-interest (ROI). By introducing a latent variable that classifies the matches as true or false, we specifically focus on the application of geometric constrains to the keypoint matching process and the achievement of robust estimates of the geometric transformation between two images showing the same object. Our experiments in a challenging image retrieval database demonstrate that our approach outperforms the most prevalent approach for geometrically constrained matching, and compares favorably to other state-of-the-art methods. Furthermore, the proposed technique concurrently provides very good segmentations of the region of interest.","PeriodicalId":120769,"journal":{"name":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127539661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}