Pub Date: 2024-11-07 | DOI: 10.1007/s11548-024-03271-3
Gerlig Widmann, Johannes Deeg, Andreas Frech, Josef Klocker, Gudrun Feuchtner, Martin Freund
Title: Correction to: Micro-robotic percutaneous targeting of type II endoleaks in the angio-suite.
Pub Date: 2024-11-04 | DOI: 10.1007/s11548-024-03287-9
Arnaud Huaulmé, Alexandre Tronchot, Hervé Thomazeau, Pierre Jannin
Purpose: Surgical skills are assessed either with observer-based scoring systems or with automatic methods based on feature or kinematic data analysis. Both approaches have limitations: observer-based assessments are subjective, while automatic ones mainly focus on technical skills, or assess non-technical skills from data strongly related to technical performance. In this study, we explore the use of heart-rate data, which is not directly tied to technical skill, to predict the values of an observer-based scoring system with random forest regressors.
Methods: Heart-rate data from 35 junior orthopedic surgery residents were collected during the evaluation of a meniscectomy performed on a bench-top simulator. Each participant was evaluated by two assessors using the Arthroscopic Surgical Skill Evaluation Tool (ASSET) score. The heart-rate data were preprocessed with threshold filtering and detrending before 41 features were extracted. A random forest regressor was then optimized with a randomized-search cross-validation strategy to predict each score component.
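The abstract does not enumerate the 41 features, but a few standard time-domain heart-rate-variability features give a feel for the kind of input such a regressor consumes. The sketch below is illustrative only: the RR-interval input and the three feature names are assumptions, not the authors' feature set.

```python
from math import sqrt
from statistics import mean, pstdev

def hr_features(rr_ms):
    """Toy HRV features from RR intervals (ms): mean HR, SDNN, RMSSD.

    Illustrative only -- the paper extracts 41 features; these three are
    common time-domain examples, not the authors' exact feature set.
    """
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return {
        "mean_hr_bpm": 60000.0 / mean(rr_ms),          # beats per minute
        "sdnn_ms": pstdev(rr_ms),                      # overall variability
        "rmssd_ms": sqrt(mean(d * d for d in diffs)),  # beat-to-beat variability
    }
```

Features of this kind would then be fed to the random forest regressor.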
Results: Prediction of the partially non-technical components shows promising results, with the best performance obtained for the safety component: a mean absolute error of 0.24, corresponding to a mean absolute percentage error of 5.76%. Feature-importance analysis allowed us to determine which features are most related to each ASSET component and, in turn, the underlying influence of the sympathetic and parasympathetic nervous systems.
Conclusion: In this preliminary work, a random forest regressor trained on features extracted from heart-rate data could be used for automatic skill assessment, especially for the partially non-technical components. Combined with more traditional data, such as kinematic data, it could help perform accurate automatic skill assessment.
Title: Automated assessment of non-technical skills by heart-rate data.
Purpose: Lower-limb muscle mass reduction and fatty degeneration develop in patients with knee osteoarthritis (KOA) and can affect their symptoms, satisfaction, expectations and functional activities. The Knee Society Scoring System (KSS), which includes patient-reported outcome measures, is widely used to evaluate knee function in KOA. This study aimed to clarify how muscle mass and fatty degeneration of the lower limb correlate with the KSS in patients with KOA.
Methods: This study included 43 patients with end-stage KOA (nine males and 34 females). Computed tomography (CT) images of the lower limb obtained for the planning of total knee arthroplasty were utilized. Ten muscle groups were segmented using our artificial-intelligence-based methods. Muscle volume was standardized by dividing by the height squared. The mean CT value of each muscle group was calculated as an index of fatty degeneration. Bivariate analysis between muscle volume or CT values and KSS was performed using Spearman's rank correlation test. Multiple regression analysis was performed, and statistical significance was set at p < 0.05.
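The bivariate step can be sketched with a minimal Spearman rank correlation. This is a stand-in for whatever statistical software the study used, and it assumes no tied values (real data would need a ties-aware implementation):

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the classic formula
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    valid only when there are no tied values (a simplifying assumption).
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```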
Results: Bivariate analysis showed that the functional activity score was significantly correlated with the mean CT value of all muscle groups except the adductors and iliopsoas. Multiple regression analysis revealed that the functional activities score was significantly associated with the mean CT values of the gluteus medius and minimus muscles and the anterior and lateral compartments of the lower leg (β = 0.42, p = 0.01; β = 0.33, p = 0.038; and β = 0.37, p = 0.014, respectively).
Conclusion: Fatty degeneration, rather than muscle mass, in the lower-limb muscles was significantly associated with the functional activities score of the KSS in patients with end-stage KOA. Notably, the gluteus medius and minimus and the anterior and lateral compartments of the lower leg are important muscles associated with functional activities.
Title: Artificial intelligence-based analysis of lower limb muscle mass and fatty degeneration in patients with knee osteoarthritis and its correlation with Knee Society Score.
Kohei Kono, Tomofumi Kinoshita, Mazen Soufi, Yoshito Otake, Yuto Masaki, Keisuke Uemura, Tatsuhiko Kutsuna, Kazunori Hino, Takuma Miyamoto, Yasuhito Tanaka, Yoshinobu Sato, Masaki Takao
Pub Date: 2024-11-03 | DOI: 10.1007/s11548-024-03284-y
Pub Date: 2024-11-01 | Epub Date: 2024-06-06 | DOI: 10.1007/s11548-024-03206-y
T Bannink, M de Ridder, S Bouman, M J A van Alphen, R L P van Veen, M W M van den Brekel, M B Karakullukçu
Purpose: This research aimed to develop an innovative method for designing and fabricating nasal prostheses that reduces anaplastologist expertise dependency while maintaining quality and appearance, allowing patients to regain their normal facial appearance.
Methods: The method involved statistical shape modeling using a morphable face model and 3D data acquired through optical scanning or CT. An automated design process generated patient-specific fits and appearances using regular prosthesis materials and 3D printing of molds. Manual input was required for specific case-related details.
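As a deliberately reduced sketch of the morphable-model fitting step: a statistical shape model expresses a shape as a mean plus weighted modes, and fitting finds the weights that best match the patient's scan. The one-mode, least-squares version below is an illustration, not the authors' algorithm (real models use many PCA modes over dense 3D correspondences):

```python
def fit_one_mode(mean_shape, component, target):
    """Least-squares fit of a single shape-mode weight w so that
    mean_shape + w * component best matches target.

    A toy one-mode version of morphable-model fitting; shapes are
    flattened coordinate vectors.
    """
    # Closed-form least squares for a single mode: w = <c, t - m> / <c, c>
    num = sum(c * (t - m) for c, t, m in zip(component, target, mean_shape))
    den = sum(c * c for c in component)
    w = num / den
    fitted = [m + w * c for m, c in zip(mean_shape, component)]
    return w, fitted
```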
Results: The developed method met all predefined requirements, replacing analog impression-making and offering compatibility with various data acquisition methods. Prostheses created through this method exhibited equivalent aesthetics to conventionally fabricated ones while reducing the skill dependency typically associated with prosthetic design and fabrication.
Conclusions: This method provides a promising approach for both temporary and definitive nasal prostheses, with the potential for remote prosthesis fabrication in areas lacking anaplastology care. While new skills are required for data acquisition and algorithm control, these technologies are increasingly accessible. Further clinical studies will help validate its effectiveness, and ongoing technological advancements may lead to even more advanced and skill-independent prosthesis fabrication methods in the future.
Title: Computer-aided design and fabrication of nasal prostheses: a semi-automated algorithm using statistical shape modeling. (pp. 2279-2285)
Pub Date: 2024-11-01 | Epub Date: 2024-06-27 | DOI: 10.1007/s11548-024-03175-2
Hayato Itoh, Masahiro Oda, Shinji Saiki, Koji Kamagata, Wataru Sako, Kei-Ichi Ishikawa, Nobutaka Hattori, Shigeki Aoki, Kensaku Mori
Purpose: Parkinson disease (PD) is a common progressive neurodegenerative disorder in our ageing society. Early-stage PD biomarkers are desired for timely clinical intervention and understanding of pathophysiology. Since one of the characteristics of PD is the progressive loss of dopaminergic neurons in the substantia nigra pars compacta, we propose a feature extraction method for analysing the differences in the substantia nigra between PD and non-PD patients.
Method: We propose a feature-extraction method for volumetric images based on a rank-1 tensor decomposition. Furthermore, we apply a feature selection method that excludes features common to PD and non-PD. We collect neuromelanin images of 263 patients (124 PD and 139 non-PD) and divide them into training and testing datasets for experiments. We then experimentally evaluate the classification accuracy of the substantia nigra between PD and non-PD patients using the proposed feature-extraction method and linear discriminant analysis.
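A rank-1 decomposition is easiest to illustrate in 2D: power iteration recovers the dominant pair of mode vectors of a matrix, and the paper's volumetric case extends this to three mode vectors. The sketch below is an analogy, not the authors' implementation:

```python
from math import sqrt

def rank1_approx(A, iters=50):
    """Best rank-1 approximation of a matrix by power iteration.

    A 2D stand-in for rank-1 tensor decomposition of volumetric images;
    the 3D case alternates over three mode vectors instead of two.
    """
    m, n = len(A), len(A[0])
    v = [1.0 / sqrt(n)] * n  # deterministic start
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = sqrt(sum(x * x for x in u)) or 1.0
        u = [x / nu for x in u]
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        nv = sqrt(sum(x * x for x in v)) or 1.0
        v = [x / nv for x in v]
    sigma = sum(u[i] * A[i][j] * v[j] for i in range(m) for j in range(n))
    return [[sigma * u[i] * v[j] for j in range(n)] for i in range(m)]
```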
Results: The proposed method achieves a sensitivity of 0.72 and a specificity of 0.64 for our testing dataset of 66 non-PD and 42 PD patients. Furthermore, we visualise the important patterns in the substantia nigra by a linear combination of rank-1 tensors with selected features. The visualised patterns include the ventrolateral tier, where the severe loss of neurons can be observed in PD.
Conclusions: We develop a new feature-extraction method for the analysis of the substantia nigra towards PD diagnosis. In the experiments, even though the classification accuracy with the proposed feature extraction method and linear discriminant analysis is lower than that of expert physicians, the results suggest the potential of tensorial feature extraction.
Title: Preliminary study of substantia nigra analysis by tensorial feature extraction. (pp. 2133-2142)
Pub Date: 2024-11-01 | Epub Date: 2024-06-02 | DOI: 10.1007/s11548-024-03187-y
Monica Emendi, Geir A Tangen, Pierluigi Di Giovanni, Håvard Ulsaker, Reidar Brekken, Frode Manstad-Hulaas, Victorien Prot, Aline Bel-Brunon, Karen H Støverud
Purpose: Currently, the intra-operative visualization of vessels during endovascular aneurysm repair (EVAR) relies on contrast-based imaging modalities. Moreover, traditional image fusion techniques lack a continuous, automatic update of the vessel configuration, which changes due to the insertion of stiff guidewires. The purpose of this work is to develop and evaluate a novel approach to improve image fusion that takes these deformations into account, combining electromagnetic (EM) tracking technology and finite element modeling (FEM).
Methods: To assess whether EM tracking can improve the prediction of the numerical simulations, a patient-specific model of the abdominal aorta was segmented and manufactured. A database of simulations with different insertion angles was created. Then, an ad hoc sensorized tool with three embedded EM sensors was designed, enabling tracking of the sensors' positions during the insertion phase. Finally, the corresponding cone beam computed tomography (CBCT) images were acquired and processed to obtain the ground-truth aortic deformations of the manufactured model.
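The selection step reported in the Results, picking the pre-computed simulation whose predicted sensor positions best match the EM measurements, amounts to a nearest-entry search. The data layout below (three (x, y, z) positions per simulation) is an assumption for illustration:

```python
from math import sqrt

def best_simulation(db, measured):
    """Pick the pre-computed simulation whose predicted EM-sensor
    positions are closest (RMS distance) to the measured ones.

    `db` maps a simulation id to its predicted (x, y, z) sensor
    positions; the layout is illustrative, not from the paper.
    """
    def rms(pred):
        se = [sum((p - q) ** 2 for p, q in zip(a, b))
              for a, b in zip(pred, measured)]
        return sqrt(sum(se) / len(se))

    return min(db, key=lambda sim_id: rms(db[sim_id]))
```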
Results: Among the simulations in the database, the one minimizing the in silico versus in vitro discrepancy in terms of sensors' positions gave the most accurate aortic displacement results.
Conclusions: The proposed approach suggests that the EM tracking technology could be used not only to follow the tool, but also to minimize the error in the predicted aortic roadmap, thus paving the way for a safer EVAR navigation.
Title: Aortic roadmapping during EVAR: a combined FEM-EM tracking feasibility study. (pp. 2239-2247)
Purpose: Analysis of operative fields is expected to aid in estimating procedural workflow and evaluating surgeons' procedural skills by considering the temporal transitions during the progression of the surgery. This study aims to propose an automatic recognition system for the procedural workflow by employing machine learning techniques to identify and distinguish elements in the operative field, including body tissues such as fat, muscle, and dermis, along with surgical tools.
Methods: We annotated 908 first-person-view images of breast surgery to facilitate segmentation. The annotated images were used to train a pixel-level classifier based on Mask R-CNN. To assess the impact on procedural workflow recognition, we annotated an additional 43,007 images. A network built on the Transformer architecture was then trained with surgical images incorporating masks for body tissues and surgical tools.
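The trend of per-tissue area transitions used downstream can be derived directly from the per-frame segmentation masks by counting pixels per tissue class. A toy version (class labels are assumptions) is:

```python
from collections import Counter

def area_transitions(masks, classes=("fat", "muscle", "dermis")):
    """Per-frame pixel area of each tissue class from segmentation masks.

    Each mask is a 2D grid of class labels; how these areas change over
    frames is the kind of temporal cue a workflow recognizer can consume.
    Class names are illustrative.
    """
    series = {c: [] for c in classes}
    for mask in masks:
        counts = Counter(label for row in mask for label in row)
        for c in classes:
            series[c].append(counts.get(c, 0))
    return series
```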
Results: The instance segmentation of each body tissue in the segmentation phase provided insights into the trend of area transitions for each tissue, while the spatial features of the surgical tools were effectively captured. For procedural workflow recognition, accounting for body tissues improved accuracy by an average of 3% over the baseline, and including surgical tools increased accuracy by a further 4% over the baseline.
Conclusion: In this study, we showed that the temporal transitions of body tissues and the spatial features of surgical tools both contribute to recognizing the procedural workflow in first-person-view surgical videos. Body tissues can be a crucial element, especially in open surgery. This study suggests that further improvements can be achieved by accurately identifying the surgical tools specific to each workflow step.
Title: An analysis on the effect of body tissues and surgical tools on workflow recognition in first person surgical videos. (pp. 2195-2202)
Hisako Tomita, Naoto Ienaga, Hiroki Kajita, Tetsu Hayashida, Maki Sugimoto
Pub Date: 2024-11-01 | DOI: 10.1007/s11548-024-03074-6
Pub Date: 2024-11-01 | Epub Date: 2024-06-25 | DOI: 10.1007/s11548-024-03169-0
Keita Takeda, Tomoya Sakai, Eiji Mitate
To address the background-bias problem in computer-aided cytology caused by microscopic slide deterioration, this article proposes a deep learning approach for cell segmentation and background removal without requiring cell annotation. A U-Net-based model was trained to separate cells from the background in an unsupervised manner by leveraging the redundancy of the background and the sparsity of cells in liquid-based cytology (LBC) images. The experimental results demonstrate that the U-Net-based model trained on a small set of cytology images can exclude background features and accurately segment cells. This capability is beneficial for debiasing in the detection and classification of the cells of interest in oral LBC. Slide deterioration can significantly affect deep learning-based cell classification. Our proposed method effectively removes background features at no cost of cell annotation, thereby enabling accurate cytological diagnosis through the deep learning of microscopic slide images.
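The redundancy/sparsity idea the model exploits can be illustrated without a neural network: treat each pixel's median value across slides as background and suppress pixels close to it. This is a simple classical stand-in for the U-Net, shown only to make the intuition concrete, not the paper's method:

```python
from statistics import median

def remove_background(images, thresh=10):
    """Unsupervised background suppression for same-sized grayscale grids.

    Backgrounds are redundant across slides and cells are sparse, so the
    per-pixel median approximates the background; pixels deviating from
    it are kept. A toy stand-in for the paper's U-Net approach.
    """
    h, w = len(images[0]), len(images[0][0])
    bg = [[median(img[i][j] for img in images) for j in range(w)]
          for i in range(h)]
    return [[[img[i][j] if abs(img[i][j] - bg[i][j]) > thresh else 0
              for j in range(w)] for i in range(h)] for img in images]
```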
Title: Background removal for debiasing computer-aided cytological diagnosis. (pp. 2165-2174)
Purpose: Manual annotations for training deep learning models in auto-segmentation are time-intensive. This study introduces a hybrid representation-enhanced sampling strategy that integrates both density and diversity criteria within an uncertainty-based Bayesian active learning (BAL) framework to reduce annotation efforts by selecting the most informative training samples.
Methods: The experiments are performed on two lower extremity datasets of MRI and CT images, focusing on the segmentation of the femur, pelvis, sacrum, quadriceps femoris, hamstrings, adductors, sartorius, and iliopsoas, utilizing a U-Net-based BAL framework. Our method selects uncertain samples with high density and diversity for manual revision, optimizing for maximal similarity to unlabeled instances and minimal similarity to existing training data. We assess the accuracy and efficiency using the Dice score and a proposed metric called reduced annotation cost (RAC), respectively. We further evaluate the impact of various acquisition rules on BAL performance and design an ablation study to estimate effectiveness.
Results: In the MRI and CT datasets, our method was superior or comparable to existing ones, achieving a 0.8% Dice and 1.0% RAC increase in CT (statistically significant), and a 0.8% Dice and 1.1% RAC increase in MRI (not statistically significant) in volume-wise acquisition. Our ablation study indicates that combining density and diversity criteria enhances the efficiency of BAL in musculoskeletal segmentation compared to using either criterion alone.
Conclusion: Our sampling method is proven efficient in reducing annotation costs in image segmentation tasks. The combination of the proposed method and our BAL framework provides a semi-automatic way for efficient annotation of medical image datasets.
{"title":"Hybrid representation-enhanced sampling for Bayesian active learning in musculoskeletal segmentation of lower extremities.","authors":"Ganping Li, Yoshito Otake, Mazen Soufi, Masashi Taniguchi, Masahide Yagi, Noriaki Ichihashi, Keisuke Uemura, Masaki Takao, Nobuhiko Sugano, Yoshinobu Sato","doi":"10.1007/s11548-024-03065-7","DOIUrl":"10.1007/s11548-024-03065-7","url":null,"abstract":"<p><strong>Purpose: </strong>Manual annotations for training deep learning models in auto-segmentation are time-intensive. This study introduces a hybrid representation-enhanced sampling strategy that integrates both density and diversity criteria within an uncertainty-based Bayesian active learning (BAL) framework to reduce annotation efforts by selecting the most informative training samples.</p><p><strong>Methods: </strong>The experiments are performed on two lower extremity datasets of MRI and CT images, focusing on the segmentation of the femur, pelvis, sacrum, quadriceps femoris, hamstrings, adductors, sartorius, and iliopsoas, utilizing a U-net-based BAL framework. Our method selects uncertain samples with high density and diversity for manual revision, optimizing for maximal similarity to unlabeled instances and minimal similarity to existing training data. We assess the accuracy and efficiency using dice and a proposed metric called reduced annotation cost (RAC), respectively. We further evaluate the impact of various acquisition rules on BAL performance and design an ablation study for effectiveness estimation.</p><p><strong>Results: </strong>In MRI and CT datasets, our method was superior or comparable to existing ones, achieving a 0.8% dice and 1.0% RAC increase in CT (statistically significant), and a 0.8% dice and 1.1% RAC increase in MRI (not statistically significant) in volume-wise acquisition. 
Our ablation study indicates that combining density and diversity criteria enhances the efficiency of BAL in musculoskeletal segmentation compared to using either criterion alone.</p><p><strong>Conclusion: </strong>Our sampling method is proven efficient in reducing annotation costs in image segmentation tasks. The combination of the proposed method and our BAL framework provides a semi-automatic way for efficient annotation of medical image datasets.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2177-2186"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139571189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
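The acquisition rule described in the abstract, maximal similarity to unlabeled instances (density) and minimal similarity to the existing training data (diversity), can be sketched as a scoring function over candidate feature vectors. This is a hedged illustration, not the authors' implementation; the function name, the cosine-similarity choice, and the `alpha`/`beta` weights are our own assumptions:

```python
import numpy as np

def select_samples(candidates, unlabeled, labeled, n_select, alpha=1.0, beta=1.0):
    """Rank uncertain candidates by a density-plus-diversity score.

    Each candidate feature vector is scored by its mean cosine similarity
    to the unlabeled pool (density: prefer representative samples) minus
    its mean cosine similarity to the already-labeled set (diversity:
    avoid redundant annotation). The top-scoring candidates are returned
    for manual revision.
    """
    def normalize(x):
        x = np.asarray(x, dtype=float)
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    c, u, l = normalize(candidates), normalize(unlabeled), normalize(labeled)
    density = (c @ u.T).mean(axis=1)     # high: typical of the unlabeled pool
    redundancy = (c @ l.T).mean(axis=1)  # high: already covered by training data
    score = alpha * density - beta * redundancy
    return np.argsort(score)[::-1][:n_select]  # indices of best candidates
```

In the paper this selection operates on uncertain samples produced by the Bayesian U-Net, so the score is applied only after an uncertainty filter.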
Purpose: A large amount of research has been conducted on the classification of medical images using deep learning. Thyroid tissue images can also be classified by cancer type. Deep learning requires a large amount of data, but not every medical institution can collect enough data for it. In such cases, a classifier trained at an institution with sufficient data can be reused at other institutions. However, when using data from multiple institutions, the feature distributions must be unified, because the features of the data differ owing to differences in data acquisition conditions.
Methods: To unify the feature distributions, the data from Institution T are transformed so that their distribution is closer to that of the data from Institution S by applying a domain transformation using semi-supervised CycleGAN. The proposed method enhances CycleGAN by considering the class-wise feature distributions so that the domain transformation is appropriate for classification. In addition, to address the problem of class imbalance, where the number of samples differs across cancer types, several methods for handling imbalanced data are applied to the semi-supervised CycleGAN.
Results: The experimental results showed that the classification performance was enhanced when the dataset from Institution S was used as training data and the testing dataset from Institution T was classified after applying the domain transformation. In addition, among the methods addressing class imbalance, focal loss contributed most to improving the mean F1 score.
Conclusion: The proposed method achieved domain transformation of thyroid tissue images between the two domains, retaining the class-relevant features across domains and achieving the best F1 score, with statistically significant differences from the other methods. In addition, the proposed method was further enhanced by addressing the class imbalance of the dataset.
{"title":"Domain transformation using semi-supervised CycleGAN for improving performance of classifying thyroid tissue images.","authors":"Yoshihito Ichiuji, Shingo Mabu, Satomi Hatta, Kunihiro Inai, Shohei Higuchi, Shoji Kido","doi":"10.1007/s11548-024-03061-x","DOIUrl":"10.1007/s11548-024-03061-x","url":null,"abstract":"<p><strong>Purpose: </strong>A large amount of research has been conducted on the classification of medical images using deep learning. Thyroid tissue images can also be classified by cancer type. Deep learning requires a large amount of data, but not every medical institution can collect enough data for it. In such cases, a classifier trained at an institution with sufficient data can be reused at other institutions. However, when using data from multiple institutions, the feature distributions must be unified, because the features of the data differ owing to differences in data acquisition conditions.</p><p><strong>Methods: </strong>To unify the feature distributions, the data from Institution T are transformed so that their distribution is closer to that of the data from Institution S by applying a domain transformation using semi-supervised CycleGAN. The proposed method enhances CycleGAN by considering the class-wise feature distributions so that the domain transformation is appropriate for classification. In addition, to address the problem of class imbalance, where the number of samples differs across cancer types, several methods for handling imbalanced data are applied to the semi-supervised CycleGAN.</p><p><strong>Results: </strong>The experimental results showed that the classification performance was enhanced when the dataset from Institution S was used as training data and the testing dataset from Institution T was classified after applying the domain transformation. In addition, among the methods addressing class imbalance, focal loss contributed most to improving the mean F1 score.</p><p><strong>Conclusion: </strong>The proposed method achieved domain transformation of thyroid tissue images between the two domains, retaining the class-relevant features across domains and achieving the best F1 score, with statistically significant differences from the other methods. In addition, the proposed method was further enhanced by addressing the class imbalance of the dataset.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2153-2163"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139492884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
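Focal loss, which the abstract reports as the most effective remedy for class imbalance, is a standard technique (Lin et al., "Focal Loss for Dense Object Detection"). A NumPy sketch of the binary form follows; this illustrates the general formula, not the authors' code, and the default `gamma`/`alpha` values are the commonly used ones, not necessarily those used in the paper:

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).

    The modulating factor (1 - p_t)^gamma down-weights easy,
    well-classified examples so training focuses on the hard, rare
    class; alpha additionally balances the positive class. With
    gamma=0 and alpha=0.5 this reduces to (half of) cross-entropy.
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)          # numerical safety
    p_t = np.where(targets == 1, probs, 1 - probs)  # prob of the true class
    alpha_t = np.where(targets == 1, alpha, 1 - alpha)
    return np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t))
```

Because easy majority-class examples contribute almost nothing to the gradient, the minority cancer types dominate training, which is consistent with the F1 improvement the authors observe.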