M. Rahman, D. You, Matthew S. Simpson, Sameer Kiran Antani, Dina Demner-Fushman, G. Thoma
{"title":"基于视觉感兴趣区域识别与分类的生物医学文章交互式图像检索框架","authors":"M. Rahman, D. You, Matthew S. Simpson, Sameer Kiran Antani, Dina Demner-Fushman, G. Thoma","doi":"10.1109/HISB.2012.18","DOIUrl":null,"url":null,"abstract":"This paper presents an interactive biomedical image retrieval system based on automatic visual region-of-interest (ROI) extraction and classification into visual concepts. In biomedical articles, authors often use annotation markers such as arrows, letters or symbols overlaid on figures and illustrations in the articles to highlight ROIs. These annotations are then referenced and correlated with concepts in the caption text or figure citations in the article text. This association creates a bridge between the visual characteristics of important regions within an image and their semantic interpretation. Our proposed method at first localizes and recognizes the annotations by utilizing a combination of rule-based and statistical image processing techniques. Identifying these assists in extracting ROIs that are likely to be highly relevant to the discussion in the article text. The image regions are then annotated for classification using biomedical concepts obtained from a glossary of imaging terms. Similar automatic ROI extraction can be applied to query images, or user may interactively mark an ROI. As a result of our method, visual characteristics of the ROIs can be mapped to text concepts and then used to search image captions. In addition, the system can toggle the search process from purely visual to a textual one (cross-modal) or integrate both visual and textual search in a single process (multi-modal) based on utilizing user feedback. The hypothesis, that such approaches would improve biomedical image retrieval, is validated through experiments on a biomedical article dataset of thoracic CT scans from the collection of ImageCLEF'2010 medical retrieval track.","PeriodicalId":375089,"journal":{"name":"2012 IEEE Second International Conference on Healthcare Informatics, Imaging and Systems Biology","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"An Interactive Image Retrieval Framework for Biomedical Articles Based on Visual Region-of- Interest (ROI) Identification and Classification\",\"authors\":\"M. Rahman, D. You, Matthew S. Simpson, Sameer Kiran Antani, Dina Demner-Fushman, G. Thoma\",\"doi\":\"10.1109/HISB.2012.18\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents an interactive biomedical image retrieval system based on automatic visual region-of-interest (ROI) extraction and classification into visual concepts. In biomedical articles, authors often use annotation markers such as arrows, letters or symbols overlaid on figures and illustrations in the articles to highlight ROIs. These annotations are then referenced and correlated with concepts in the caption text or figure citations in the article text. This association creates a bridge between the visual characteristics of important regions within an image and their semantic interpretation. Our proposed method at first localizes and recognizes the annotations by utilizing a combination of rule-based and statistical image processing techniques. Identifying these assists in extracting ROIs that are likely to be highly relevant to the discussion in the article text. 
The image regions are then annotated for classification using biomedical concepts obtained from a glossary of imaging terms. Similar automatic ROI extraction can be applied to query images, or user may interactively mark an ROI. As a result of our method, visual characteristics of the ROIs can be mapped to text concepts and then used to search image captions. In addition, the system can toggle the search process from purely visual to a textual one (cross-modal) or integrate both visual and textual search in a single process (multi-modal) based on utilizing user feedback. The hypothesis, that such approaches would improve biomedical image retrieval, is validated through experiments on a biomedical article dataset of thoracic CT scans from the collection of ImageCLEF'2010 medical retrieval track.\",\"PeriodicalId\":375089,\"journal\":{\"name\":\"2012 IEEE Second International Conference on Healthcare Informatics, Imaging and Systems Biology\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-09-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 IEEE Second International Conference on Healthcare Informatics, Imaging and Systems Biology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HISB.2012.18\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE Second International Conference on Healthcare Informatics, Imaging and Systems Biology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HISB.2012.18","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An Interactive Image Retrieval Framework for Biomedical Articles Based on Visual Region-of-Interest (ROI) Identification and Classification
This paper presents an interactive biomedical image retrieval system based on automatic visual region-of-interest (ROI) extraction and classification into visual concepts. In biomedical articles, authors often highlight ROIs with annotation markers such as arrows, letters, or symbols overlaid on figures and illustrations. These annotations are then referenced and correlated with concepts in the caption text or in figure citations in the article text. This association creates a bridge between the visual characteristics of important regions within an image and their semantic interpretation. Our proposed method first localizes and recognizes the annotations using a combination of rule-based and statistical image processing techniques. Identifying these markers assists in extracting ROIs that are likely to be highly relevant to the discussion in the article text. The extracted regions are then annotated for classification using biomedical concepts obtained from a glossary of imaging terms. The same automatic ROI extraction can be applied to query images, or the user may interactively mark an ROI. With this method, the visual characteristics of ROIs can be mapped to text concepts and used to search image captions. In addition, the system can switch the search process from purely visual to textual (cross-modal), or integrate visual and textual search in a single process (multi-modal), based on user feedback. The hypothesis that such approaches improve biomedical image retrieval is validated through experiments on a dataset of biomedical articles containing thoracic CT scans from the ImageCLEF'2010 medical retrieval track collection.
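The abstract only names the annotation-localization step; it does not describe the authors' implementation. The sketch below is a minimal, hypothetical illustration of what the rule-based first pass over a figure might look like, assuming OpenCV 4.x. The function name marker_candidates, the area thresholds, and the shape features (solidity, elongation) are illustrative assumptions, not the paper's actual parameters; in the described pipeline a statistical stage would follow to confirm or reject each candidate.

```python
# Hypothetical sketch of a rule-based pass for locating annotation-marker
# candidates (arrows, letters, symbols) overlaid on a figure.
# Assumes OpenCV 4.x; all thresholds below are illustrative, not the
# authors' actual values.
import cv2

def marker_candidates(figure_path, min_area=30, max_area=2000):
    """Return bounding boxes of small, high-contrast overlay shapes."""
    gray = cv2.imread(figure_path, cv2.IMREAD_GRAYSCALE)
    # Overlaid markers are typically near-black or near-white; Otsu
    # thresholding separates them from the underlying image content.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area <= area <= max_area):
            continue  # too small (noise) or too large (anatomy, borders)
        hull_area = cv2.contourArea(cv2.convexHull(c))
        solidity = area / hull_area if hull_area > 0 else 0.0
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        elongation = max(w, h) / (min(w, h) + 1e-6)
        # Heuristic: arrows tend to be elongated and non-convex,
        # letters compact and solid.
        if elongation > 2.5 or solidity > 0.8:
            boxes.append(cv2.boundingRect(c))
    return boxes
```

In a full system, each returned box would be passed to a trained classifier to decide whether it is really an arrow, letter, or symbol before the pointed-to ROI is extracted.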
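The cross-modal/multi-modal switching described in the abstract can be pictured as a simple score-fusion step. The following sketch is a hypothetical illustration of that idea only: visual_search, text_search, and roi_to_concepts are placeholder callables, and the linear weight alpha (which user feedback could adjust, for instance raising it when visually retrieved items are judged relevant) is an assumption, not the paper's fusion model.

```python
# Hypothetical sketch of search-mode switching: purely visual, cross-modal
# (ROI concepts used to search captions), or multi-modal (weighted fusion).
# All names and the fusion scheme are illustrative placeholders.
def retrieve(query_roi, visual_search, text_search, roi_to_concepts,
             mode="multi", alpha=0.5):
    """Rank documents by visual similarity, caption match, or a weighted mix."""
    visual_scores = visual_search(query_roi)   # {doc_id: similarity score}
    concepts = roi_to_concepts(query_roi)      # e.g. ["nodule", "cavity"]
    text_scores = text_search(concepts)        # {doc_id: caption-match score}

    if mode == "visual":
        fused = visual_scores
    elif mode == "cross":
        # Cross-modal: the visual query is answered with a text search alone.
        fused = text_scores
    else:
        # Multi-modal: linear combination of both score lists.
        doc_ids = set(visual_scores) | set(text_scores)
        fused = {d: alpha * visual_scores.get(d, 0.0)
                    + (1 - alpha) * text_scores.get(d, 0.0)
                 for d in doc_ids}
    return sorted(fused, key=fused.get, reverse=True)
```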