Leveraging Embedding Information to Create Video Capsule Endoscopy Datasets
Pere Gilabert, C. Malagelada, Hagen Wenzek, Jordi Vitrià, S. Seguí
2023 18th International Conference on Machine Vision and Applications (MVA), published 2023-07-23
DOI: 10.23919/MVA57639.2023.10215919 (https://doi.org/10.23919/MVA57639.2023.10215919)
Citations: 0
Abstract
As the field of deep learning continues to expand, it has become increasingly apparent that large volumes of data are needed to train algorithms effectively. This is particularly challenging in video capsule endoscopy, where obtaining and labeling sufficient data is expensive and time-consuming. To overcome these challenges, we developed an automatic video-selection method that exploits the diversity of unlabeled videos to identify the most relevant ones for labeling. The results show a significant performance improvement with this methodology: the system selects relevant and diverse videos and achieves high accuracy in the classification task. This reduces the annotators' workload, since they can label fewer videos while maintaining the same level of classification accuracy.
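The abstract does not describe the exact selection algorithm, so the sketch below is only an assumption of how diversity-driven video selection over unlabeled embeddings might look: a greedy farthest-point (max-min distance) selection applied to video-level feature vectors. The function `select_diverse_videos`, its parameters, and the stand-in embeddings are all hypothetical and not taken from the paper.

```python
# Illustrative sketch only: one common diversity-based selection strategy
# (greedy farthest-point selection) over video-level embeddings, assumed
# here because the abstract does not specify the authors' exact method.
import numpy as np


def select_diverse_videos(video_embeddings: np.ndarray, n_select: int) -> list[int]:
    """Greedily pick indices of mutually distant video embeddings.

    video_embeddings: array of shape (n_videos, embedding_dim), e.g. the
    mean of per-frame features from a pretrained encoder (assumption).
    n_select: number of videos to forward to annotators.
    """
    n_videos = video_embeddings.shape[0]

    # Start from the video closest to the global centroid.
    centroid = video_embeddings.mean(axis=0)
    first = int(np.argmin(np.linalg.norm(video_embeddings - centroid, axis=1)))
    selected = [first]

    # Distance from every video to its nearest already-selected video.
    min_dist = np.linalg.norm(video_embeddings - video_embeddings[first], axis=1)

    while len(selected) < min(n_select, n_videos):
        # Pick the video farthest from the current selection (max-min criterion).
        next_idx = int(np.argmax(min_dist))
        selected.append(next_idx)
        new_dist = np.linalg.norm(video_embeddings - video_embeddings[next_idx], axis=1)
        min_dist = np.minimum(min_dist, new_dist)

    return selected


if __name__ == "__main__":
    # Example with random stand-in embeddings (hypothetical numbers).
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(100, 512))  # 100 videos, 512-dim features
    print(select_diverse_videos(embeddings, n_select=10))
```

In such a scheme, the selected subset covers the embedding space as broadly as possible, which is one way a labeling budget could be spent on relevant yet non-redundant videos, consistent with the workload reduction the abstract reports.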