Examining the learning effects of a low-cost haptic-based virtual reality simulator on laparoscopic cholecystectomy
Pub Date: 2013-06-20 | DOI: 10.1109/CBMS.2013.6627794 | Pages: 233-238
C. Park, K. Wilson, A. Howard
Virtual reality (VR) surgical training is a potentially useful method for practicing and improving surgical skills. However, the current literature on VR training has not addressed the efficacy of VR systems that can be used outside of the training facility. The goal of this study is therefore to evaluate the benefits of a low-cost VR simulation system as a means of increasing the learning of surgical skills. Our pilot case focuses on laparoscopic cholecystectomy, one of the most common surgeries currently performed in the United States and often used as the training case for laparoscopy due to its high frequency and perceived low risk. The specific aim of this study is to examine the efficacy of a low-cost haptic-based VR surgical simulator in improving surgical skills, measured by the change in students' learning effect.
{"title":"Examining the learning effects of a low-cost haptic-based virtual reality simulator on laparoscopic cholecystectomy","authors":"C. Park, K. Wilson, A. Howard","doi":"10.1109/CBMS.2013.6627794","DOIUrl":"https://doi.org/10.1109/CBMS.2013.6627794","url":null,"abstract":"Virtual reality (VR) surgical training can be a potentially useful method for improving practicing surgical skills. However, the current literature on VR training has not discussed the efficacy of VR systems that are useful outside of the training facility. As such, the goal of this study is to evaluate the benefits of using a low-cost VR simulation system for providing a method to increase the learning of surgical skills. Our pilot case focuses on laparoscopic cholecystectomy, which is one of the most common surgeries currently performed in the United States and is often used as the training case for laparoscopy due to its high frequency and perceived low risk. The specific aim of this study is to examine the efficacy of a low-cost haptic-based VR surgical simulator on improving practicing surgical skills, measured by the change in the learning effect of students.","PeriodicalId":20519,"journal":{"name":"Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems","volume":"32 1","pages":"233-238"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82466966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting visualization of hospital clinical reports using survival analysis of access logs from a virtual patient record
Pub Date: 2013-06-20 | DOI: 10.1109/CBMS.2013.6627841 | Pages: 461-464
P. Rodrigues, C. Dias, Diana Rocha, Isabel Boldt, A. Teixeira-Pinto, R. Cruz-Correia
The amount of data currently being produced, stored and used in hospital settings is straining information technology infrastructure, forcing clinical reports to be stored on secondary storage devices. The aim of this work was to develop a model that predicts the probability that each clinical report will be visualized within a certain period after production. We collected log data from May 2011 to January 2013 from an existing virtual patient record at a tertiary university hospital in Porto, Portugal, with information on report creation and first-time visualization dates, along with contextual information. The main factors associated with visualization were identified using logistic regression. These factors were then used as explanatory variables for predicting the probability of a piece of information being accessed after production, using Kaplan-Meier analysis and the Weibull probability distribution. Clinical department, type of encounter and report type were found to be significantly associated with both time-to-visualization and probability of visualization.
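A minimal sketch of the survival-modelling step the abstract describes, using the lifelines library. The column names and toy numbers are assumptions for illustration; the paper's data schema is not given in the abstract.

```python
# Sketch only: Kaplan-Meier and Weibull survival modelling of report-access
# times. Column names and values are hypothetical placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter, WeibullFitter

# Each row: one clinical report; duration_days = time from creation to first
# visualization (or to end of observation); visualized = 1 if it was opened.
logs = pd.DataFrame({
    "duration_days": [2, 30, 150, 7, 400, 90],
    "visualized":    [1,  1,   0, 1,   0,  1],
})

# Non-parametric estimate of P(not yet visualized at time t).
kmf = KaplanMeierFitter()
kmf.fit(logs["duration_days"], event_observed=logs["visualized"])
print(kmf.survival_function_)

# Parametric Weibull fit, e.g. to extrapolate beyond the observation window.
wf = WeibullFitter()
wf.fit(logs["duration_days"], event_observed=logs["visualized"])
print(wf.lambda_, wf.rho_)  # scale and shape parameters

# Probability a report is visualized within 30 days of production.
print(1 - wf.survival_function_at_times(30).iloc[0])
```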
{"title":"Predicting visualization of hospital clinical reports using survival analysis of access logs from a virtual patient record","authors":"P. Rodrigues, C. Dias, Diana Rocha, Isabel Boldt, A. Teixeira-Pinto, R. Cruz-Correia","doi":"10.1109/CBMS.2013.6627841","DOIUrl":"https://doi.org/10.1109/CBMS.2013.6627841","url":null,"abstract":"The amount of data currently being produced, stored and used in hospital settings is stressing information technology infrastructure, making clinical reports to be stored in secondary memory devices. The aim of this work was to develop a model that predicts the probability of visualization, within a certain period after production, of each clinical report. We collected log data, from January 2013 till May 2011, from an existing virtual patient record, in a tertiary university hospital in Porto, Portugal, with information on report creation and report first-time visualization dates, along with contextual information. The main factors associated with visualization were defined using logistic regression. These factors were then used as explanatory variables for predicting the probability of a piece of information being accessed after production, using Kaplan-Meier analysis and the Weibull probability distribution. Clinical department, type of encounter and report type were found significantly associated with time-to-visualization and probability of visualization.","PeriodicalId":20519,"journal":{"name":"Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems","volume":"8 3 1","pages":"461-464"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82677393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personal Health Information detection in unstructured web documents
Pub Date: 2013-06-20 | DOI: 10.1109/CBMS.2013.6627781 | Pages: 155-160
A. H. Razavi, Kambiz Ghazinour
This paper describes our study of the incidence of Personal Health Information (PHI) on the Web. PHI is usually shared under conditions of confidentiality, protection and trust, and should not be disclosed or made available to unrelated third parties or the general public. We first analyzed the characteristics that potentially make systems successful at identifying unsolicited or unjustified PHI disclosures. We then designed and implemented an integrated Natural Language Processing/Machine Learning (NLP/ML) system that detects disclosures of personal health information according to these characteristics, including the detected patterns. This research is a first step toward a learning system, trained on a limited training set built from the results of the processing chain described in the paper, that can detect PHI disclosures across the Web in general.
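The paper's implementation is not published in the abstract; as a rough stand-in for an NLP/ML disclosure detector of this kind, the sketch below trains a bag-of-words classifier to flag documents that disclose PHI. The training examples and labels are invented for the demonstration.

```python
# Sketch only: a generic text classifier standing in for the paper's
# NLP/ML pipeline; the training examples are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "My mother was diagnosed with type 2 diabetes last March.",     # disclosure
    "John Smith, DOB 1962, takes 20mg citalopram for depression.",  # disclosure
    "Diabetes affects millions of people worldwide.",               # general info
    "Our clinic is open Monday through Friday.",                    # no PHI
]
labels = [1, 1, 0, 0]  # 1 = contains a PHI disclosure

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)

print(clf.predict(["She has been taking insulin since her diagnosis."]))
```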
{"title":"Personal Health Information detection in unstructured web documents","authors":"A. H. Razavi, Kambiz Ghazinour","doi":"10.1109/CBMS.2013.6627781","DOIUrl":"https://doi.org/10.1109/CBMS.2013.6627781","url":null,"abstract":"This paper describes our study of the incidence of Personal Health Information (PHI) on the Web. PHI is usually shared under conditions of confidentiality, protection and trust, and should not be disclosed or available to unrelated third parties or the general public. We first analyzed the characteristics that potentially make systems successful in identification of unsolicited or unjustified PHI disclosures. In the next stage, we designed and implemented an integrated Natural Language Processing/Machine Learning (NLP/ML)-based system that detects disclosures of personal health information, specifically according to the above characteristics including detected patterns. This research is regarded as the first step toward a learning system that will be trained based on a limited training set built on the result of the processing chain described in the paper in order to generally detect the PHI disclosures over the web.","PeriodicalId":20519,"journal":{"name":"Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems","volume":"45 1","pages":"155-160"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88488324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel multi-material decomposition of Dual-Energy CT data
Pub Date: 2013-06-20 | DOI: 10.1109/CBMS.2013.6627842 | Pages: 465-468
R. Maia, C. Jacob, J. R. Mitchell, A. Hara, Alvin C. Silva, W. Pavlicek
Dual-Energy Computed Tomography (DECT) is a CT modality in which two images are acquired simultaneously at two energy levels and then decomposed into two material-density images. It is also possible to further decompose these images into volume-fraction images that approximate the percentage of a given material at each pixel. Here, we describe a novel parallel version of the multi-material decomposition algorithm proposed by Mendonça et al., which is used to obtain volume-fraction images. Our parallel version accelerates decomposition by a factor of 200. We also discuss some of the algorithm's limitations.
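As a toy illustration of per-pixel volume-fraction decomposition (not Mendonça et al.'s actual algorithm), the sketch below solves, independently for every pixel, a small linear system mapping two material-density measurements plus a sum-to-one constraint onto three material fractions. The basis values are invented; the point is that pixels are independent, which is what makes the problem amenable to the massive parallelisation the paper exploits.

```python
# Sketch only: toy three-material volume-fraction decomposition.
# Material basis values are invented; every pixel solves independently.
import numpy as np

# Basis representation of each candidate material (columns: fat, soft tissue,
# iodine-enhanced blood) in the two material-density channels. Invented values.
A = np.array([
    [0.9, 1.0, 1.1],   # contribution to density image 1
    [0.1, 0.3, 2.0],   # contribution to density image 2
    [1.0, 1.0, 1.0],   # volume fractions sum to one
])
A_inv = np.linalg.inv(A)

# Two material-density images, 256x256 pixels, with synthetic values.
rng = np.random.default_rng(0)
d1 = rng.uniform(0.9, 1.1, (256, 256))
d2 = rng.uniform(0.1, 2.0, (256, 256))

# Stack the measurements with the constant 1 for the sum-to-one constraint,
# then solve all pixels at once: fractions[..., m] per material m.
b = np.stack([d1, d2, np.ones_like(d1)], axis=-1)
fractions = b @ A_inv.T           # one 3x3 solve per pixel, fully vectorized

print(fractions.shape)            # (256, 256, 3)
print(fractions.sum(axis=-1))     # ~1 everywhere, up to numerical error
```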
{"title":"Parallel multi-material decomposition of Dual-Energy CT data","authors":"R. Maia, C. Jacob, J. R. Mitchell, A. Hara, Alvin C. Silva, W. Pavlicek","doi":"10.1109/CBMS.2013.6627842","DOIUrl":"https://doi.org/10.1109/CBMS.2013.6627842","url":null,"abstract":"Dual-Energy Computed Tomography (DECT) is a new modality of CT where two images are acquired simultaneously at two energy levels, and then decomposed into two material density images. It is also possible to further decompose these images into volume fraction images that approximate the percentage of a given material at each pixel. Here, we describe a novel parallel version of the multilateral decomposition algorithm proposed by Mendonça et al., which is used to obtain volume fraction images. Our parallel version accelerates decomposition by 200x. We also discuss some of the algorithm limitations.","PeriodicalId":20519,"journal":{"name":"Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems","volume":"2 1","pages":"465-468"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87716659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An adaptive method for the recovery of missing samples from FHR time series
Pub Date: 2013-06-20 | DOI: 10.1109/CBMS.2013.6627812 | Pages: 337-342
V. Oikonomou, J. Spilka, C. Stylios, L. Lhotská
Missing data cause serious problems for the automatic evaluation of fetal heart rate (FHR) series. In this work we present an algorithm to mitigate this problem. More specifically, we propose an adaptive approach based on two steps. The first is the reconstruction step, in which we obtain an estimate of the missing data using an empirical dictionary. The second is the construction of the dictionary using the values updated in the first step. These two steps are applied iteratively until convergence, so that the method continually adapts the dictionary and the reconstructed time series to the newly gained information. Experiments on real and simulated data have shown the usefulness of our approach. More specifically, a comparison with cubic spline interpolation shows that the proposed approach achieves 4 to 9 dB better reconstruction quality.
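The abstract does not spell out how the empirical dictionary is built; the sketch below shows the general alternating scheme it describes, with a PCA-based dictionary as a stand-in assumption: initialise the gaps by interpolation, then alternately rebuild the dictionary from the current signal segments and re-estimate the missing samples by projection, until the imputed values stop changing.

```python
# Sketch only: alternating dictionary-construction / reconstruction for gap
# filling in a 1-D series, in the spirit of the paper's two-step scheme.
# The PCA dictionary is a stand-in assumption, not the authors' choice.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1000)
signal = 140 + 10 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 1, t.size)

mask = rng.random(t.size) < 0.1          # 10% of samples missing
x = signal.copy()
x[mask] = np.interp(t[mask], t[~mask], signal[~mask])  # initial estimate

L, n_atoms = 50, 8
for it in range(20):
    patches = x.reshape(-1, L)                       # non-overlapping windows
    # Step 2 of the paper: (re)build the empirical dictionary from the
    # current estimate: here, the top principal components of the patches.
    mean = patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    D = Vt[:n_atoms]                                 # dictionary atoms
    # Step 1: reconstruct every patch in the dictionary's span and copy the
    # reconstruction into the missing positions only.
    recon = mean + (patches - mean) @ D.T @ D
    x_new = x.copy()
    x_new[mask] = recon.reshape(-1)[mask]
    if np.max(np.abs(x_new - x)) < 1e-4:             # convergence check
        break
    x = x_new

err = signal[mask] - x[mask]
print("reconstruction SNR (dB):",
      10 * np.log10(signal[mask].var() / err.var()))
```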
{"title":"An adaptive method for the recovery of missing samples from FHR time series","authors":"V. Oikonomou, J. Spilka, C. Stylios, L. Lhotská","doi":"10.1109/CBMS.2013.6627812","DOIUrl":"https://doi.org/10.1109/CBMS.2013.6627812","url":null,"abstract":"Missing data cause serious problem for automatic evaluation of the fetal heart rate(FHR) series. In this work we present an algorithm to surpress this problem. More specifically, an adaptive approach is proposed based on two steps. The first step concerns the reconstruction step where we obtain an estimate of the missing data using an empirical dictionary. The second step consists from the construction of the dictionary using the updated values from the first step. The above two steps are applied iteratively until convergence. The method adapts each time the dictionary and the reconstructed time series to the new information that we gain. Results on real and simulated experiments have shown the usefullness of our approach. More specifically, a comparison with cubic spline interpolation is performed and have shown that the proposed approach achieved 4 to 9dB better reconstruction ability.","PeriodicalId":20519,"journal":{"name":"Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems","volume":"56 1","pages":"337-342"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89981184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards organ-centric compositional development of safe networked supervisory medical systems
Pub Date: 2013-06-20 | DOI: 10.1109/CBMS.2013.6627779 | Pages: 143-148
Woochul Kang, Po-Liang Wu, M. Rahmaniheris, L. Sha, Richard B. Berlin, J. Goldman
Medical devices are increasingly capable of interacting with each other by leveraging network connectivity and interoperability, promising great benefits for patient safety and the effectiveness of medical services. However, ad-hoc integration of medical devices through networking can significantly increase the complexity of the system and make it more vulnerable to potential errors and safety hazards. In this paper, we address this problem by introducing an organ-centric compositional development approach. In our approach, medical devices are composed into semi-autonomous clusters according to organ-specific physiology in a network-fail-safe manner. Each organ-centric cluster captures common device interaction patterns of sensing and control in support of human physiology. A library of these formally verified organ-centric architectural patterns enables rapid and safe composition of supervisory controllers specialized for specific medical scenarios. Using airway-laser surgery as a case study of practical importance, we demonstrate the feasibility of our approach within Simulink's model-driven development framework.
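The airway-laser case is a classic interlock scenario (the laser must not fire while high-concentration oxygen is flowing). As a rough sketch of the supervisory pattern, assuming that clinical rule, hypothetical device interfaces and thresholds, and a deny-by-default response to loss of network contact:

```python
# Sketch only: a supervisory interlock for the airway-laser scenario.
# Device interfaces, thresholds, and timing are hypothetical; the one
# deliberate property is the fail-safe default when contact is lost.
import time

SAFE_O2_FRACTION = 0.25   # assumed threshold, not from the paper
HEARTBEAT_TIMEOUT = 0.5   # seconds without device contact before fail-safe

class LaserSupervisor:
    def __init__(self):
        self.last_heartbeat = 0.0
        self.o2_fraction = 1.0          # pessimistic initial belief

    def on_ventilator_report(self, o2_fraction: float) -> None:
        """Periodic status message from the ventilator cluster."""
        self.o2_fraction = o2_fraction
        self.last_heartbeat = time.monotonic()

    def laser_enable(self) -> bool:
        """Grant the laser permission to fire only when it is provably safe."""
        stale = time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT
        if stale:
            return False                 # network-fail-safe: deny by default
        return self.o2_fraction < SAFE_O2_FRACTION

sup = LaserSupervisor()
sup.on_ventilator_report(o2_fraction=0.21)
print(sup.laser_enable())   # True: fresh data, oxygen reduced
time.sleep(0.6)
print(sup.laser_enable())   # False: heartbeat stale, fall back to safe state
```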
{"title":"Towards organ-centric compositional development of safe networked supervisory medical systems","authors":"Woochul Kang, Po-Liang Wu, M. Rahmaniheris, L. Sha, Richard B. Berlin, J. Goldman","doi":"10.1109/CBMS.2013.6627779","DOIUrl":"https://doi.org/10.1109/CBMS.2013.6627779","url":null,"abstract":"Medical devices are increasingly capable of interacting with each other by leveraging network connectivity and interoperability, promising a great benefit for patient safety and effectiveness of medical services. However, ad-hoc integration of medical devices through networking can significantly increase the complexity of the system and make the system more vulnerable to potential errors and safety hazards. In this paper, we address this problem and introduce an organ-centric compositional development approach. In our approach, medical devices are composed into semi-autonomous clusters according to organ-specific physiology in a network-fail-safe manner. Each organ-centric cluster captures common device interaction patterns of sensing and control to support human physiology. The library of these formally verified organ-centric architectural patterns enables rapid and safe composition of supervisory controllers, which are specialized for specific medical scenarios. Using airway-laser surgery as a case study of practical importance, we demonstrate the feasibility of our approach under Simulink's model-driven development framework.","PeriodicalId":20519,"journal":{"name":"Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems","volume":"70 1","pages":"143-148"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88505883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cost sensitive adaptive random subspace ensemble for computer-aided nodule detection
Pub Date: 2013-06-20 | DOI: 10.1109/CBMS.2013.6627784 | Pages: 173-178
Peng Cao, Dazhe Zhao, Osmar R Zaiane
Many lung nodule computer-aided detection methods have been proposed to help radiologists in their decision making. Because high sensitivity is essential in the candidate identification stage, the initial suspect-nodule generation process produces a large number of false positives, creating more work for radiologists. The difficulty of false positive reduction lies in the variation in the appearance of potential nodules and in the imbalanced distribution of nodule and non-nodule candidates in the dataset. To address these challenges, we extend the random subspace method into a novel Cost Sensitive Adaptive Random Subspace ensemble (CSARS), so as to increase the diversity among the components and handle imbalanced data classification. Experimental results show the effectiveness of the proposed method in terms of G-mean and AUC in comparison with commonly used methods.
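CSARS itself is not specified in the abstract; the sketch below illustrates the two ingredients it combines, in plain form: a random subspace ensemble (each member sees a random subset of features) whose members are made cost-sensitive through class weights that penalise missed nodules more than false alarms. The adaptive subspace selection of the actual method is omitted, and the cost ratio is an assumption.

```python
# Sketch only: a plain cost-sensitive random subspace ensemble, illustrating
# the two ideas CSARS combines; the "adaptive" part of CSARS is not shown.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=40, weights=[0.95],
                           random_state=0)   # ~5% positives: imbalanced

n_members, subspace = 25, 15
members = []
for _ in range(n_members):
    feats = rng.choice(X.shape[1], subspace, replace=False)
    # class_weight makes each tree cost-sensitive: a missed nodule (class 1)
    # costs 10x a false positive. The 10:1 ratio is an assumed cost.
    tree = DecisionTreeClassifier(class_weight={0: 1, 1: 10}, max_depth=5)
    tree.fit(X[:, feats], y)
    members.append((feats, tree))

# Average the members' probability estimates over their own subspaces.
proba = np.mean([t.predict_proba(X[:, f])[:, 1] for f, t in members], axis=0)
pred = (proba >= 0.5).astype(int)

sens = (pred[y == 1] == 1).mean()
spec = (pred[y == 0] == 0).mean()
print("G-mean:", np.sqrt(sens * spec))   # one of the metrics the paper reports
```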
{"title":"Cost sensitive adaptive random subspace ensemble for computer-aided nodule detection","authors":"Peng Cao, Dazhe Zhao, Osmar R Zaiane","doi":"10.1109/CBMS.2013.6627784","DOIUrl":"https://doi.org/10.1109/CBMS.2013.6627784","url":null,"abstract":"Many lung nodule computer-aided detection methods have been proposed to help radiologists in their decision making. Because high sensitivity is essential in the candidate identification stage, there are countless false positives produced by the initial suspect nodule generation process, giving more work to radiologists. The difficulty of false positive reduction lies in the variation of the appearances of the potential nodules, and the imbalance distribution between the amount of nodule and non-nodule candidates in the dataset. To solve these challenges, we extend the random subspace method to a novel Cost Sensitive Adaptive Random Subspace ensemble (CSARS), so as to increase the diversity among the components and overcome imbalanced data classification. Experimental results show the effectiveness of the proposed method in terms of G-mean and AUC in comparison with commonly used methods.","PeriodicalId":20519,"journal":{"name":"Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems","volume":"47 5 1","pages":"173-178"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87687527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validation of microaneurysm-based diabetic retinopathy screening across retina fundus datasets
Pub Date: 2013-06-20 | DOI: 10.1109/CBMS.2013.6627776 | Pages: 125-130
L. Giancardo, T. Karnowski, K. Tobin, F. Mériaudeau, E. Chaum
In recent years, automated retina image analysis (ARIA) algorithms have received increasing interest from the medical image analysis community. Particular attention has been given to techniques able to automate the pre-screening of Diabetic Retinopathy (DR) using inexpensive retina fundus cameras. With the growing number of diabetics worldwide, these techniques offer the potential benefit of broad-based, inexpensive screening. The contribution of this paper is twofold. First, we propose a straightforward pipeline from microaneurysm (MA) detection, an early sign of DR, to automatic classification of DR without employing any additional features. Second, we quantify the generalisation ability of the MA detection method using synthetic examples and, more importantly, experiment with two public datasets consisting of more than 1,350 images graded as normal or showing signs of DR. With cross-dataset tests, we obtained results better than or comparable to other recent methods. Since our experiments are performed only on publicly available datasets, our results are directly comparable with those of other research groups.
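A rough sketch of the kind of pipeline described, under our own assumption (not the paper's stated formulation) that each image yields a list of MA-candidate confidence scores: the top-scoring candidates form the image's feature vector, and a classifier trained on image-level labels produces the screening decision.

```python
# Sketch only: image-level DR classification from microaneurysm candidate
# scores. The "top-k candidate scores as features" design is our assumption
# about how such a pipeline can work, and the data below are synthetic.
import numpy as np
from sklearn.svm import SVC

def image_features(candidate_scores, k=10):
    """Top-k MA-candidate confidences, zero-padded, as a fixed-size vector."""
    s = np.sort(np.asarray(candidate_scores, dtype=float))[::-1]
    return np.pad(s[:k], (0, max(0, k - len(s))))

rng = np.random.default_rng(0)
# Toy data: DR images tend to contain a few high-confidence MA candidates.
healthy = [rng.uniform(0.0, 0.4, rng.integers(0, 15)) for _ in range(100)]
dr      = [np.concatenate([rng.uniform(0.5, 1.0, rng.integers(2, 8)),
                           rng.uniform(0.0, 0.4, rng.integers(0, 15))])
           for _ in range(100)]

X = np.array([image_features(c) for c in healthy + dr])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(probability=True).fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])   # per-image probability of DR
```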
{"title":"Validation of microaneurysm-based diabetic retinopathy screening across retina fundus datasets","authors":"L. Giancardo, T. Karnowski, K. Tobin, F. Mériaudeau, E. Chaum","doi":"10.1109/CBMS.2013.6627776","DOIUrl":"https://doi.org/10.1109/CBMS.2013.6627776","url":null,"abstract":"In recent years, automated retina image analysis (ARIA) algorithms have received increasing interest by the medical imaging analysis community. Particular attention has been given to techniques able to automate the pre-screening of Diabetic Retinopathy (DR) using inexpensive retina fundus cameras. With the growing number of diabetics worldwide, these techniques have the potential benefits of broad-based, inexpensive screening. The contribution of this paper is twofold: first, we propose a straightforward pipeline from microaneurysm (an early sign of DR) detection to automatic classification of DR without employing any additional features; then, we quantify the generalisation ability of the MA detection method by employing synthetic examples and, more importantly, we experiment with two public datasets which consist of more than 1,350 images graded as normal or showing signs of DR. With cross-datasets tests, we obtained results better or comparable to other recent methods. Since our experiments are performed only on publicly available datasets, our results are directly comparable with those of other research groups.","PeriodicalId":20519,"journal":{"name":"Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems","volume":"35 1","pages":"125-130"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75161711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards medical device behavioural validation using Petri nets
Pub Date: 2013-06-20 | DOI: 10.1109/CBMS.2013.6627756 | Pages: 4-10
Paulo E. S. Barbosa, M. Morais, K. Galdino, Melquisedec Andrade, L. Gomes, F. Moutinho, J. Figueiredo
Medical device development and validation are difficult activities due to the critical nature of these products, which involve risks to human lives. Moreover, regulatory agencies are increasing their control over companies because of the persistently large number of harms caused for various reasons, with software failures among the main causes. It is thus clear that more formal and sophisticated software development techniques should be investigated. In this paper, we show how Petri nets can play the role of a generic framework for architectural decisions in control systems, enabling not only verification and simulation but also an important bridge for the traceability requested by regulatory bodies. We claim that it is possible to establish traceability from architectural elements to code, test cases, functional and safety requirements, and so on. To make our point clear, we conducted a case study based on a generic infusion pump specification.
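To make the modelling idea concrete, here is a minimal Petri net interpreter together with an invented two-transition fragment of an infusion pump (the places, transitions, and marking are our toy example, not the paper's model). A transition fires only when all of its input places hold tokens, which is what makes behavioural properties mechanically checkable.

```python
# Sketch only: a minimal Petri net interpreter plus a toy infusion-pump
# fragment. The net below is an invented example, not the paper's model.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy fragment: the pump may start infusing only if the line is primed,
# and an occlusion alarm always stops the infusion.
net = PetriNet({"idle": 1, "line_primed": 1})
net.add_transition("start", ["idle", "line_primed"], ["infusing"])
net.add_transition("occlusion", ["infusing"], ["alarm", "idle"])

net.fire("start")
print(net.marking)           # infusing holds a token
print(net.enabled("start"))  # False: cannot start twice
net.fire("occlusion")
print(net.marking)           # back to idle, alarm raised
```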
{"title":"Towards medical device behavioural validation using Petri nets","authors":"Paulo E. S. Barbosa, M. Morais, K. Galdino, Melquisedec Andrade, L. Gomes, F. Moutinho, J. Figueiredo","doi":"10.1109/CBMS.2013.6627756","DOIUrl":"https://doi.org/10.1109/CBMS.2013.6627756","url":null,"abstract":"Medical devices development and validation are difficult activities due to the critical nature of these products, involving risks to the human lives. Moreover, regulatory agencies are increasing the control over companies because of the still huge number of harms caused for several reasons, having software failures as one of the main causes. Thus it is clear that more formal and sophisticated software development techniques should be investigated. In this paper, we show how Petri nets can play the role of a generic framework for architectural decisions for control systems, allowing besides verification/simulation, an important bridge in the requested traceability by regulatory bodies. We claim that it is possible to satisfy traceability from architectural elements to code, test cases, functional and safety requirements and so on. In order to make clear our point, we conducted a case study from a generic infusion pump specification.","PeriodicalId":20519,"journal":{"name":"Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems","volume":"167 1","pages":"4-10"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76052878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reducing the complexity of k-nearest diverse neighbor queries in medical image datasets through fractal analysis
Pub Date: 2013-06-20 | DOI: 10.1109/CBMS.2013.6627772 | Pages: 101-106
Rafael L. Dias, Renato Bueno, M. X. Ribeiro
Content-Based Image Retrieval (CBIR) systems allow images to be searched by similarity, employing a numeric representation obtained from them automatically or semi-automatically. Nevertheless, the query result does not always bring what the user expected: CBIR systems face the semantic gap problem. One way of overcoming this problem is to add diversity to query execution, so that the user can ask the system to return the most varied images with respect to some similarity criterion. However, applying diversity to large datasets has a prohibitive computational cost and, moreover, the result often differs from what is expected, with a resulting subset containing images highly dissimilar to the query image. In this paper we propose an approach to reduce the computational cost of CBIR queries that combine similarity and diversity criteria. The proposed approach employs fractal analysis of the dataset to estimate a suitable radius for a database subset on which to perform a similarity query with diversity. It selects images closer to the query center and applies the diversity factor to this subset, providing not only a better understanding of the impact of the diversity factor on the query result, but also an improvement in execution time.
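A rough sketch of the idea as we read it (the radius formula and the greedy max-min diversification below are our stand-ins, not the paper's exact procedure): estimate the dataset's correlation fractal dimension D from the pair-count power law, invert pairs(r) ∝ r^D to find a radius expected to contain enough candidates, then diversify only within that ball.

```python
# Sketch only: fractal-dimension-guided radius estimation followed by greedy
# max-min diversification. Formulae are our stand-ins for the paper's method.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 2))          # toy feature vectors
query = np.array([0.5, 0.5])
k = 5

# Correlation dimension D: slope of log(pair count within r) vs log(r).
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
radii = np.logspace(-2, -0.5, 10)
counts = [int((d < r).sum() - len(X)) // 2 for r in radii]  # unordered pairs
D = np.polyfit(np.log(radii), np.log(counts), 1)[0]

# Invert count(r) ~ N * (r/R)^D to get a radius expected to hold enough
# candidates for a diverse k-result (10x oversampling is our heuristic).
R = d.max()
r_query = R * (10 * k / len(X)) ** (1 / D)

dist_q = np.linalg.norm(X - query, axis=1)
candidates = np.where(dist_q <= r_query)[0]      # restrict work to this ball

# Greedy max-min diversification inside the ball: start from the nearest
# image, then repeatedly add the candidate farthest from the chosen set.
chosen = [candidates[np.argmin(dist_q[candidates])]]
while len(chosen) < k:
    d_to_set = np.min(
        np.linalg.norm(X[candidates, None, :] - X[chosen][None, :, :], axis=-1),
        axis=1)
    chosen.append(candidates[np.argmax(d_to_set)])

print("radius:", r_query, "candidates:", len(candidates), "result:", chosen)
```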
{"title":"Reducing the complexity of k-nearest diverse neighbor queries in medical image datasets through fractal analysis","authors":"Rafael L. Dias, Renato Bueno, M. X. Ribeiro","doi":"10.1109/CBMS.2013.6627772","DOIUrl":"https://doi.org/10.1109/CBMS.2013.6627772","url":null,"abstract":"Content-Based Image Retrieval (CBIR) Systems allow the search of images by similarity employing a numeric representation automatically or semi-automatically obtained from them to perform the search. Nevertheless, the query result does not always bring what the user expected. In this sense, CBIR systems face the semantic gap problem. One way of overcoming this problem is by the addition of diversity in query execution, so that the user can ask the system to return the most varied images regarding some similarity criteria. However, applying diversity on large datasets has a prohibitive computational cost and, moreover, the result often differs from the expected with a resulting subset that has images with high dissimilarity to the query image. In this paper we propose an approach to reduce the computational cost of Content-Based Image Retrieval systems regarding similarity and diversity criteria. The proposed approach employs dataset fractals analysis to estimate a suitable radius for a database subset to perform a similarity query regarding diversity. It selects closer images to the query center and applies the diversity factor to the subset, providing not only a better comprehension of the impact of the diversity factor to the query result, but also an improvement in execution time.","PeriodicalId":20519,"journal":{"name":"Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems","volume":"1 1","pages":"101-106"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85950303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}